
As a parent, you want your child to surround themselves with good influences. That’s true not only for who they spend time with in real life, but also for the people and ideas they’re exposed to on social media.
If you or your child are concerned about the content appearing in their feed, one beneficial step you can take is to help them reset their social media algorithm. Here’s how to reset your child’s algorithm on TikTok, Instagram, and other platforms.
Social media algorithms are the complex computations that operate behind the scenes of every social media platform to determine what each user sees.
Everything on your child’s social media feed is likely the result of something they liked, commented on, or shared. (For a more comprehensive explanation, check out our Parent’s Guide to Social Media Algorithms.)
Social media algorithms have a snowball effect. For example, if your child “likes” a cute dog video, they’ll likely see more of that type of content. However, if they search for topics like violence, adult material, or conspiracy theories, their feed can quickly be overwhelmed with negative content.
Therefore, it’s vital that parents actively examine and reset their child’s algorithm when needed, and also teach them the skills to evaluate it for themselves.
Research clearly demonstrates the potentially negative impacts of social media on tweens and teens. How it affects your child depends a lot on what’s in their feed. And what’s in their feed has everything to do with algorithms.
Helping your child reset their algorithm is a wonderful opportunity to teach them digital literacy. Explain why it’s important to think critically about what they see on social media, and how what they do on the platform influences the content they’re shown.
Here are some steps you can take together to clean up their feed:
Resetting all of your child’s algorithms in one fell swoop can be daunting. Instead, pick the app they use the most and tackle that first.
If your kiddo follows a lot of accounts, you might need to break this step into multiple sessions. Pause on each account they follow and have them consider these questions:
If the answer is “yes” to any of these questions, suggest they unfollow the account. If they’re hesitant — for example, if they’re worried unfollowing might cause friend problems — they can instead “hide” or “mute” the account so they don’t see those posts in their feed.
On the flip side, encourage your child to interact with accounts that make them feel good about themselves and portray positive messages. Liking, commenting, and sharing content that lifts them up will have a ripple effect on the rest of their feed.
After you’ve gone through their feed, show your child how to examine their settings. These settings mostly influence sponsored content, but given the problematic history of advertisers marketing to children on social media, they’re worth a look.
Every social media app has slightly different options for how much control users have over their algorithm. Here's what you should know about resetting the algorithm on popular apps your child might use.
To get the best buy-in and help your child form positive long-term content consumption habits, it’s best to let them take the lead in deciding what accounts and content they want to see.
At the same time, kids shouldn't have to navigate the internet on their own. Social platforms can easily suggest content and profiles that your child isn't ready to see. A social media monitoring app, such as BrightCanary, can alert you if your child encounters something concerning.
Here are a few warning signs you should watch out for as you review your child's feed:
If you spot any of this content, it’s time for a longer conversation to assess your child’s safety. You may decide it’s appropriate to insist they unfollow a particular account. And if what you see on your child’s feed makes you concerned for their mental health or worried they may harm themselves or others, consider reaching out to a professional.
Algorithms are the force that drives everything your child sees on social media and can quickly cause their feed to be overtaken by negative content. Regularly reviewing your child’s feed with them and teaching them skills to control their algorithm will help keep their feed positive and minimize some of the negative impacts of social media.

Just by existing as a person in 2023, you’ve probably heard of social media algorithms. But what are algorithms? How do social media algorithms work? And why should parents care?
At BrightCanary, we’re all about giving parents the tools and information they need to take a proactive role in their children’s digital life. So, we’ve created this guide to help you understand what social media algorithms are, how they impact your child, and what you can do about it.
Social media algorithms are complex sets of rules and calculations used by platforms to prioritize the content that users see in their feeds. Each social network uses different algorithms. The algorithm on TikTok is different from the one on YouTube.
In short, algorithms dictate what you see when you use social media and in what order.
Back in the Wild Wild West days of social media, you would see all of the posts from everyone you were friends with or followed, presented in chronological order.
But as more users flocked to social media and the amount of content ballooned, platforms started introducing algorithms to filter through the piles of content and deliver relevant and interesting content to keep their users engaged. The goal is to get users hooked and keep them coming back for more.
Algorithms are also hugely beneficial for generating advertising revenue for platforms because they help target sponsored content.
Each platform uses its own mix of factors, but here are some examples of what influences social media algorithms:
Most social media sites heavily prioritize showing users content from people they’re connected with on the platform.
TikTok is unique because it emphasizes showing users new content based on their interests, which means you typically won’t see posts from people you follow on your TikTok feed.
With the exception of TikTok, if you interact frequently with a particular user, you’re more likely to see their content in your feed.
The algorithms on TikTok, Instagram Reels, and Instagram Explore prioritize showing you new content based on the type of posts and videos you engage with. For example, the more cute cat videos you watch, the more cute cat videos you’ll be shown.
YouTube looks at the creators you interact with, your watch history, and the type of content you view to determine suggested videos.
The more likes, shares, and comments a post gets, the more likely it is to be shown to other users. This momentum is the snowball effect that causes posts to go viral.
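To make that snowball effect concrete, here is a minimal sketch of engagement-weighted ranking. Everything here is hypothetical: the weights, the post data, and the scoring formula are invented for illustration, and real platforms combine hundreds of signals.

```python
# Toy model of engagement-weighted feed ranking (illustrative only).

def engagement_score(post):
    # Shares and comments are weighted higher than likes because they
    # signal stronger engagement than a passive tap.
    return post["likes"] * 1.0 + post["comments"] * 2.0 + post["shares"] * 3.0

def rank_feed(posts):
    # Highest-scoring posts surface first: the more engagement a post
    # gets, the more users see it, which generates more engagement.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "dog_video", "likes": 120, "comments": 10, "shares": 5},
    {"id": "news_clip", "likes": 40, "comments": 2, "shares": 1},
    {"id": "viral_meme", "likes": 300, "comments": 80, "shares": 90},
]

for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
```

Even in this toy version, the already-popular post wins every time it is ranked, which is exactly the momentum that carries posts viral.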
There are ways social media algorithms can benefit your child, such as creating a personalized experience and helping them discover new things related to their interests. But the drawbacks are also notable — and potentially concerning.
Since social media algorithms show users more of what they seem to like, your child's feed might quickly become overwhelmed with negative content. Clicking a post out of curiosity or naivety, such as one promoting a conspiracy theory, can inadvertently expose your child to more such content. What may begin as innocent exploration could gradually influence their beliefs.
Experts frequently cite “thinspo” (short for “thinspiration”), social media content that promotes unhealthy body ideals and disordered eating habits, as another algorithmic concern.
Even though most platforms ban content encouraging eating disorders, users often bypass filters using creative hashtags and abbreviations. If your child clicks on a thinspo post, they may continue to be served content that promotes eating disorders.
Although social media algorithms are something to monitor, the good news is that parents can help minimize the negative impacts on their child.
Here are some tips:
It’s a good idea to monitor what the algorithm is showing your child so you can spot any concerning trends. Regularly sit down with them to look at their feed together.
You can also use a parental monitoring service to alert you if your child consumes alarming content. BrightCanary is an app that continuously monitors your child’s social media activity and flags any concerning content, such as photos that promote self-harm or violent videos — so you can step in and talk about it.
Keep up on concerning social media trends, such as popular conspiracy theories and internet challenges, so you can spot warning signs in your child’s feed.
Talk to your child about who they follow and how those accounts make them feel. Encourage them to think critically about the content they consume and to disengage if something makes them feel bad.
Algorithms influence what content your child sees when they use social media. Parents need to be aware of the potentially harmful impacts this can have on their child and take an active role in combating the negative effects.
Stay in the know about the latest digital parenting news and trends by subscribing to our weekly newsletter.

As a Boy Mom, writing about looksmaxxing is personal. My tween currently prioritizes comfort (nothing but sweatpants!) and convenience (haircuts are a chore!) over aesthetics. But as he speeds toward middle school, I know that’s bound to change.
Just as my generation of girls absorbed harmful pressure about looks from magazines like Seventeen and CosmoGirl, social media has burst the protective bubble boys once existed inside. Adolescent males are now inundated with toxic ideals — and looksmaxxing is one of the more alarming ones. It’s enough to convince me to delay social media for my son as long as possible.
Here’s everything you need to know about looksmaxxing.
Looksmaxxing is a manosphere-adjacent subculture glorifying hypermasculine male bodies and promoting (frequently dubious) methods to maximize physical appearance.
“Scores” are assigned to physical aspects, often in pursuit of biologically impossible standards. Some of the physical characteristics sought after in looksmaxxing include:
Looksmaxxers will go to varying lengths to achieve their goals, from routine hygiene and fitness (softmaxxing) to invasive and potentially dangerous tactics (hardmaxxing).
Looksmaxxing is entirely a byproduct of internet culture. It originated on incel message boards in the 2010s and was popularized in the 2020s as part of the broader TikTok “glow-up” trend.
Now, social media algorithms feed looksmaxxing content to young males, and forums such as Reddit and Discord give them the means to dive deeper into the subculture.
In my opinion, here are the top three looksmaxxing dangers you need to know:
Many looksmaxxing techniques overlap with disordered eating behaviors, particularly rigidity around numbers. However, these behaviors are often masked by an atypical presentation, such as muscle bulking and over-exercising.
Not coincidentally, the past two decades have seen a 416% increase in boys hospitalized for eating disorders.
Looksmaxxing promotes a narrow, misogynistic picture of masculinity, telling males they must take extreme measures to make themselves attractive to women before they’re considered a “real man.”
Looksmaxxing rating systems are built on Eurocentric ideals, and many traits coveted by the movement, like fair skin and symmetrical features, are associated with whiteness.
Raters disguise racist sentiments as pseudo-science by using technical terms to describe traits commonly associated with non-white people, like “recessed palate” or “protruding upper third.”
Since my daughter was very young, I’ve worked to counterprogram societal messaging that her worth is in her looks. Looksmaxxing is a good reminder that our boys need this, too. Be overt and make it an ongoing conversation.
To offset the Eurocentric, ableist, fatphobic ideals looksmaxxing promotes, expose your son to racially and physically diverse people, in media and in real life. Help him identify successful men who don’t fit traditional male beauty standards.
Involving male figures in the conversation will help him feel less alone. Ask the men in his life to tell him about their experiences with body image growing up.
Get that toxic nonsense off his feed! Show your son how to regularly reset his social media and YouTube algorithms for a healthier online experience.
Keep an eye on what your son does and says online so you can watch for looksmaxxing red flags. BrightCanary can help by monitoring everything your son types across all platforms and watches on YouTube.
If your son shows signs of mental health issues or disordered eating, seek professional help right away. Here are some resources:
Looksmaxxing is a dangerous social media trend promoting toxic and unhealthy standards for men. To protect your son, intentionally counterprogram looksmaxxing messaging, monitor his online activity, and seek professional help if needed.
BrightCanary helps you monitor your child’s activity on the apps they use the most and alerts you to anything concerning, like messages about body image. You’ll also get insights into your child’s online activity and learn if they’re engaging with content about the manosphere and looksmaxxing. Download today to get started for free.

Bullying. Self-image. Online strangers. AI. Most parents wait until something goes wrong before having a conversation. After all, these topics can feel overwhelming or premature, especially if your child seems fine. But by the time there’s a problem, it’s already harder to talk.
The truth is simple: the earlier you start these conversations, the safer and more prepared your child will be.
There’s often a gap between what kids say and what’s actually happening on their devices. Not because they’re being dishonest, but because they don’t always connect their online behavior to real-world risks.
That’s where this guide comes in. Developed in partnership with Lisa Smith, the Peaceful Parent, it gives you the exact words to start meaningful, low-pressure conversations — before something goes wrong.
“The conversations that feel too early are almost never too early. In peaceful parenting, we talk about connection before correction — and the same is true for the digital world.” - Lisa Smith, The Peaceful Parent
Inside, you’ll find:
This guide is designed to help you feel calm, confident, and prepared — not overwhelmed.
Bonus: Get 20% off BrightCanary’s Text Message Plus annual plan. Use code PARENTS20 at checkout.
Kids today are growing up in a completely different world than we did.
Algorithms shape how they see themselves. AI chatbots are becoming companions. And online interactions blur the line between real and risky. Most of this is happening quietly, on devices we rarely see.
Kids won’t always come to you when something feels off. But if you’ve already opened the door to conversation, they’re far more likely to. This guide helps you open that door early, and keep it open.
BrightCanary helps you stay informed about your child’s digital world, without reading every message or invading their privacy.
It’s the easiest way to stay involved while still respecting your child’s independence.
Yes. You can download and share it with caregivers, schools, or anyone raising kids.
The guide includes conversation starters for ages 10 through teens.
No. The guide stands on its own. BrightCanary adds real-time monitoring and alerts if you want extra support.
This guide focuses on what to say, giving you real language you can use immediately, grounded in connection-first parenting.
Your child’s digital world is already here. The best thing you can do isn’t to wait — it’s to start talking.
Download the Free Conversation Guide (PDF) and open the door today.

Two juries just told tech companies what parents already knew: social media causes harm to kids, and the “we’re just the platform” defense is not as strong as it once was.
In March, a New Mexico jury found Meta liable over child exploitation risks on its platform and imposed a $375 million penalty. The next day, a Los Angeles jury found Meta and Google liable in a landmark social media addiction trial.
In both cases, it came down to how the platform algorithms push content to children. The argument was not just that harmful things existed on Instagram and YouTube. It was that the companies’ own systems were actively recommending, feeding, and amplifying content to children in ways that made harm more likely.
The content on online platforms is protected under Section 230 of the Communications Decency Act, which shields companies from liability for content posted by users. But these cases asked a different question: are these platforms, as they are built and marketed today, safe for children? These bellwether verdicts say no.
The Los Angeles case questions the idea that tech companies are passive hosts with no responsibility for how their systems shape behavior.
The claim was that the products themselves, including Instagram and YouTube, were designed in ways that made harm more likely for young users. These design features include the things meant to keep kids engaged and scrolling: endless feeds, auto-play loops, and recommendation systems. The jury found that the way the platforms were built and operated could itself be part of the harm.
In the New Mexico case, the plaintiff argued that Meta presented Facebook, Instagram, and WhatsApp as safe for families, while knowing the platforms exposed children to sexual solicitation, exploitation, and serious mental health risks.
The New Mexico attorney general’s office compiled evidence with a good old-fashioned sting operation, in which synthetic decoy child accounts were created. Within hours those accounts were targeted with harmful content and predatory behavior. The state said Meta concealed what it knew, made false or misleading statements about safety, and took unfair advantage of children’s vulnerability and inexperience while continuing to profit from the platforms.
The jury agreed and found Meta liable under New Mexico consumer protection law. They treated it like a classic product safety case: You said the product was safe. You knew serious dangers existed. You did not tell families the full truth.
In both cases, the Section 230 defense failed because the arguments weren’t about hosting harmful content, but rather harmful product design that was actively targeted to children.
These juries agreed: concern about social media isn’t just panic. Adults do understand technology, and parents recognize that social media apps expose children to risks they are not prepared to manage on their own. The risks are real.
A child is not sleeping because their phone never turns off.
A teen is pulled into a spiral that harms their body image and self-worth by a stream of videos promoted to them.
Private messages are a place where strangers, scammers, or predators can operate because the platform won't act against them.
The important thing to understand is that these verdicts are the beginning, not the last word: appellate courts may yet weigh in. But they are not meaningless, either. They tell parents, lawmakers, and courts that a jury heard the evidence and found the companies responsible.
Stop treating social media as harmless. Stop assuming that because something is popular or recommended on the App Store, it is safe. We have to recognize the risks built into products designed to capture attention and lower resistance, not only for our kids, but for ourselves.
Start by knowing what apps your child is using. Which ones allow private messages? Is screen time spilling into sleep, secrecy, or distress? Is your child suddenly defensive about certain apps or conversations?
BrightCanary can help. The app has a proven track record with tools to monitor your child’s online activity across every app they use, including texts and direct messages. You’ll get concerning content alerts in real time, plus insights about your child’s digital life and even tips to help you start important conversations. Download today to start your free trial.
These platforms are not risky because bad people use them. They are risky because of how they are built, how they are marketed, and the content they push. Parents already understood that. The law is starting to catch up.

When I was asked to answer whether YouTube Shorts is safe for kids, I was already aware of some risks. Reader, let me tell you, when I dug into the research, I was floored.
Addiction, depression, sleep problems, and decreased attention span are just a handful of the dangers kids face from YouTube Shorts.
In fact, YouTube Shorts can be just as problematic as other short-form video platforms like TikTok and Instagram Reels. What I learned will definitely cause me to rethink how I let my own child use YouTube, and I encourage you to do the same.
YouTube Shorts allows users to create and view short-form videos. However, the viewing experience is far different from the longer videos that YouTube is most known for.
Shorts are accessed through a dedicated, social media-like scrolling feed. Users can interact with the videos by liking, commenting, and sharing them.
YouTube Shorts pose the same risks as longer videos on the app, like inappropriate content, cyberbullying, and exposure to predators. But the short-form nature of YouTube Shorts introduces additional risks, similar to the dangers kids face from platforms like TikTok.
The concise, high-intensity, fast-paced, and visually captivating nature of short-form videos encourages an immersive experience, which can lead to compulsive viewing behaviors and even addiction.
Studies have uncovered a direct correlation between addiction to short-form videos, like those on YouTube Shorts, and depression among adolescents.
It’s important to emphasize that the videos themselves aren’t inherently the problem; it’s when viewing behavior becomes addictive that mental health problems emerge.
Short-form video addiction is also linked with social anxiety in adolescents.
Numerous studies show that short-form video platforms are associated with greater inattentive symptoms in children.
Researchers suggest the frequent attention-switching that happens while watching these videos may decrease kids’ ability to focus on a singular task for prolonged periods.
A recent study found that teens who exhibit more severe symptoms of short-form video addiction were also more likely to report poorer sleep quality.
In order to encourage continued engagement, YouTube’s algorithms frequently recommend videos similar to what users have already consumed. This creates a potentially dangerous feedback loop where viewers are primarily fed content that reinforces the same beliefs and opinions.
These videos also encourage passive viewing rather than critical thinking and seeking out new information. This lack of exposure to different points of view can be particularly harmful to children and teens, who are still forming their worldview and sense of self.
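The feedback loop described above can be modeled as a toy simulation. Everything here is invented for illustration: the topics, the starting weights, and the engagement boost are assumptions, and real recommender systems are vastly more complex. The point is only that a small "show more of what they watched" rule is enough to make one topic dominate a feed.

```python
import random
from collections import Counter

def recommend(weights):
    # Pick the next video's topic in proportion to current weights.
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def simulate(n_watches=1000, seed=42):
    # All topics start equally likely; each watch boosts its topic,
    # so later recommendations drift toward what was already watched.
    random.seed(seed)
    weights = {"sports": 1.0, "music": 1.0, "conspiracy": 1.0}
    history = []
    for _ in range(n_watches):
        topic = recommend(weights)
        weights[topic] += 0.5  # engagement reinforces future recommendations
        history.append(topic)
    return Counter(history)

print(simulate())
```

Running this repeatedly with different seeds, the topic that happens to get early watches pulls ahead, regardless of which topic it is. That is the echo-chamber dynamic in miniature.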
Short-form video addiction can decrease students' motivation to learn, their sense of satisfaction, and the joy they get from the learning process.
Despite the risks, I don’t plan on banning my child from using YouTube. But I will take additional steps to keep him safe on the platform. Here are some ideas you can try as well:
Google recently rolled out additional parental controls that allow you to limit the amount of time your child spends scrolling through YouTube Shorts or to block short videos altogether.
These new controls, when used in combination with other YouTube parental controls, go a long way toward helping your child engage with the platform in a healthier manner.
YouTube Kids doesn’t have Shorts, so keeping your child on this platform is a great option. YouTube Kids is designed for users up to age 12.
Requiring your child to watch YouTube in shared spaces, like the living room, makes it easier for you to keep an eye on what they view.
Occasionally sit with your kid and watch YouTube with them to see what they’re interested in and what the algorithm is feeding them.
Even the most vigilant parent can’t catch it all. That’s why BrightCanary’s YouTube monitoring includes YouTube Shorts. The app reports on what your child watches and searches for on YouTube so you don’t have to vet every video yourself. Here’s how:
YouTube Shorts can be safe for kids, provided parents take proper precautions. Using parental controls to limit how long your child can spend scrolling Shorts and monitoring their use are two vital safeguards if you plan to let them use the app.
BrightCanary helps you monitor your child’s activity on the apps they use the most, including YouTube Shorts. Download today to get started for free.

Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:
🧠 More than half of TikTok’s ADHD content is misinformation: Online platforms are flooded with misleading or unsubstantiated mental health content, according to new research. On TikTok alone, 52% of ADHD-related videos and 41% of autism videos were found to be inaccurate. YouTube averaged 22% misinformation on the same topics.
Content created by healthcare professionals was consistently more accurate, but professional voices represent only a small fraction of what's actually circulating on these platforms. (And that one influencer with the flashy editing and jump-cuts is way more engaging.) The content that spreads is the content that generates engagement, and emotionally resonant self-diagnosis videos do exactly that.
When teens absorb inaccurate information about mental health — especially about their own potential diagnoses — it can shape how they understand themselves, how they talk to doctors, and whether they seek the right kind of help. It can also normalize self-labeling in ways that feel affirming in the short term but complicate actual support down the road.
What parents can do: If your child brings home a TikTok-informed self-diagnosis, resist the urge to dismiss it outright. Instead, treat it as an opening: "That's interesting — what made you feel like that applies to you?" If the concern feels real, bring it to a professional rather than letting the algorithm be the final word.
🔞 OpenAI plans to introduce adult content to ChatGPT, but age-verification is already failing: OpenAI CEO Sam Altman announced that ChatGPT will begin allowing erotica for verified adults, with a rollout expected later this year. We’re not here to yuck anyone’s yum, but the concern — voiced loudly by, among others, billionaire Mark Cuban — is that the age-verification system isn’t there yet, and kids will be the ones impacted most.
OpenAI’s age-verification system misclassifies minors as adults 12% of the time, and we’ve found that existing safety features on ChatGPT are a bust. Cuban’s perspective: "This isn't about porn. That's everywhere. Including here [on X]. This is about the connection that can happen and go into who knows what direction with some kid who used their older sibling's log in." (Case in point: Character.ai limited the way teens use its platform following lawsuits, but other explicit AI chatbot platforms like Polybuzz are thriving.)
For parents, the practical takeaway is the same one that applies to every platform that promises age-gating: the gate is not the protection. Your child's understanding of why certain content is harmful, and their ability to come to you when something feels wrong, is. BrightCanary monitors everything your child types across all apps, including ChatGPT — so if something concerning is happening, you'll know about it.
📺 Why harmful content keeps reaching kids — and what advertising has to do with it: There’s an economic reason for why platforms keep serving harmful content to kids, according to researchers writing in The Conversation: recommendation algorithms are designed to maximize engagement, not to distinguish between helpful and harmful content. And emotionally charged content (that which provokes fear, anxiety, outrage, or shock) consistently generates more engagement than neutral material.
Because many social platforms are funded by advertising revenue, and advertising revenue depends on attention, the incentive to serve that content never goes away, regardless of what a platform's safety team is doing on the other side of the building. That’s one of the reasons the same issues keep recurring across different platforms and years, and why parental involvement remains essential regardless of what any platform promises. Curious to learn more? We've written about how social media algorithms work and how to talk to your kids about them.
Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.
Bullying doesn't always look like name-calling. Online, it can be subtler … and harder for kids to name. Use these conversation starters to check in. That last question is the most important one to get an honest answer to.
🔐 Kids aren't learning cybersecurity in school — but parents can fill the gap. Save these five practical ways to teach kids digital security at home, from modeling good habits yourself to teaching them to question what they see.
📋 Pinterest CEO Bill Ready is backing a social media ban for kids under 16. “As both a CEO and a parent, I believe we need to be honest: social media as it exists today is not safe for kids under 16,” Ready wrote on LinkedIn. “We need clearer rules, better tools for parents, and more accountability across the tech ecosystem.”
💔 A 9-year-old in Texas died after attempting a social media challenge she had seen online. JackLynn Blackwell passed away on February 3rd after attempting the blackout challenge, a dangerous trend that has been circulating on social media platforms for years. The CDC has documented at least 80 child deaths connected to this challenge. We don't share this to frighten you — we share it because awareness is a form of protection. Dangerous viral challenges are rarely announced; they spread quietly through feeds and group chats. Knowing what's circulating and having an open line of communication with your child can make a difference.

Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:
📱 Good news from Instagram (yes, really): Instagram is rolling out a new feature that alerts parents when their teen searches for suicide or self-harm content — including phrases that suggest a teen may be at risk. The alerts will go to parents enrolled in Instagram's parental supervision tools. Instagram says they set the threshold to require multiple searches within a short window, while still erring on the side of caution. While that means some alerts may not reflect a real crisis, this is a meaningful step overall. If your teen is on Instagram, now is a good time to make sure you're enrolled in parental supervision so these alerts actually reach you.
⚠️ Bad news from Instagram (there it is): According to court documents from the ongoing federal lawsuit in California, Meta's own internal survey found nearly 1 in 5 teens aged 13 to 15 reported seeing unwanted nudity or sexual images on Instagram. The same survey found about 8% of that age group had seen someone harm themselves or threaten to do so on the platform.
These are Meta's own numbers. It's a useful gut-check as Instagram rolls out new safety features: progress is real, and so is the distance still to go.
🚫 Should we ban teenagers from social media? Earlier this year, Australia rolled out its first-of-its-kind social media ban for kids under 16. Similar proposals are circulating in the US and UK. But some argue that we shouldn’t ban teens from social media because kids will always find ways around bans, enforcement is difficult, and waiting until a child turns 16 doesn't actually teach them how to navigate the internet safely. It just delays the moment they're dropped in.
Our take: Why not limit access and create better guardrails? Smarter regulation matters. Platforms don't need to give kids access to features engineered for compulsive use: endless scroll, autoplay, algorithmically turbocharged feeds. Age verification should be meaningful, not performative. And content moderation for minors needs real teeth. But regulation alone isn't a parenting strategy. The goal isn't to keep kids off the internet forever. It's to raise kids who can handle it. That requires ongoing conversations, not just app settings or age cutoffs. When you’re ready to start monitoring social media, start here.
Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.
Screen time tends to reach new heights as the school year hits its midpoint. Use these conversation starters to check in on how your teen is feeling about their digital habits … without it turning into a lecture:
😔 The deepfake crisis no one is talking about enough: New large-scale research from UNICEF, ECPAT, and INTERPOL found that at least 1.2 million children across 11 countries reported being victims of sexually explicit deepfakes in the past year. This is an urgent and underreported crisis — and it's a reminder that online safety isn't just about screen time.
📊How much is your teen on their phone at school? More than an hour, on average — and most of that time is social media. A recent analysis of American teens found that adolescents aged 13 to 18 spend more than 8.5 hours daily on screen-based entertainment overall, with over an hour of phone use happening during the school day itself.
🤖 Teens still love TikTok: New Pew Research data puts some numbers to teen platform habits: 68% of teens ages 13–17 use TikTok, with roughly 1 in 5 saying they're on it almost constantly. About 1 in 5 teens also report nearly constant YouTube use, and 64% of teens use AI chatbots (about 3 in 10 do so daily).

⚖️ Is this Big Tech’s Big Tobacco moment? A landmark social media addiction trial is happening right now in Los Angeles. The trial centers on a 20-year-old woman who alleges that endless scrolling and other design features worsened her depression and suicidal thoughts. Snap and TikTok settled before the trial; Meta and YouTube are fighting the claims. Some observers are calling this Big Tech’s Big Tobacco moment — a reference to the tobacco litigation in the ‘90s that exposed internal documents, led to warning labels, and reshaped public health policy.
Meta CEO Mark Zuckerberg and Instagram chief Adam Mosseri have testified so far. Internal documents shown in court suggest Meta knew minors were using its apps below the age minimum, the company prioritized maximizing time spent scrolling, and safety recommendations from experts were sometimes disregarded. Meta disputes the characterization, arguing the documents are cherry-picked and outdated.
What’s striking is that Meta’s own internal research found that parental supervision tools did not meaningfully curb teens’ compulsive use. Even when parents use the tools the platforms provide, behaviors don’t significantly change — a finding that reinforces something we’ve talked about often: screen time limits and parental controls are not set-it-and-forget-it solutions.
They’re tools. Helpful and necessary ones. But tools alone don’t teach judgment, emotional regulation, or resilience.
The timing of the trial is especially notable. The day after Adam Mosseri testified that heavy social media use may be “problematic” but not clinically addictive, a new longitudinal study published in Nature found that teens who struggled to describe their feelings or avoid unpleasant emotions were more vulnerable to developing social media addiction over time.
What does it all mean? This trial is ongoing. Researchers and lawmakers around the world are increasingly worried about compulsive use. Hundreds of families and school districts are suing major platforms. And more bellwether cases are coming. If juries consistently find that addictive design harmed minors, the financial and regulatory consequences could be enormous.
For parents, this is a reminder that:
We designed BrightCanary to help parents stay involved and curious in their children’s digital lives. Because technology safety is a skill, not a setting.
Believe it or not, we’re about halfway through the academic year. This is a great time to zoom out and reset goals — both academic and personal. These conversation-starters help teens connect their daily habits to their bigger ambitions.
🧍♀️ What is the internet like for a 15-year-old girl? In this evocative essay, an anonymous teen describes being inundated with misogyny online. (Language warning.) It’s a sobering reminder that algorithms don’t just show content — they shape culture.
🧸 The villain of Toy Story 5 is … tablets. Pixar’s most nostalgic franchise is confronting “iPad kid” culture head-on. The new trailer shows Woody, Buzz, and the gang competing with iPads for kids’ attention. Art imitates life, after all. What do you think about the trailer?
👾 Discord is rolling out age verification for users. What does it mean, and why is your teen so upset about it? We explain.

I’ve written a lot about how social media is detrimental to kids’ mental health. But witnessing the effort some teens in my life put into selfies motivated me to explore the impact these platforms have on young people’s self-esteem in particular. Does the pressure to be perfect online hurt the way they feel about themselves? I discovered the answer is a solid (and, frankly, unsurprising) yes.
Heightened attention to physical appearance and wavering self-esteem are normal for teens, due in part to developing bodies and an increased awareness of social comparison. Here’s how social media has supercharged this:
Social media prompts unhealthy comparisons in users of all ages. But adolescents' prefrontal cortexes aren’t fully formed, so they process videos and images they see online in a particularly harmful way, literally changing their still-developing brains.
Teens are bombarded with curated, heavily edited images online. Research suggests that these unrealistic beauty standards can significantly change their perception of attractiveness, including how they rank themselves in comparison.
It’s not just viewing altered images that’s a problem. Using filters and editing tools to maximize their own physical attractiveness can also lead to lower self-image. This is particularly stark among teens of color due to racial biases in social media beauty filters. Often modeled on white people, filters reinforce racist ideals of attractiveness.
This conversation often focuses on girls, but boys are also harmed. In one study, nearly every boy reported being exposed to content about appearance such as building muscle and having a certain jawline. Research shows that the more time boys spend on social media, the lower their body satisfaction.
Another way young boys are impacted is that they’re frequently fed a narrow idea of what it means to be male. Exposure to content insisting they must build muscle and have lots of money to impress girls is associated with anxiety, feelings of isolation, and low self-esteem in boys.
While self-esteem around physical appearance takes a particular hit, it’s not the only area that suffers. Constant comparison with others’ social lives and achievements creates feelings of not measuring up.
Here are some signs that may indicate your teen’s self-esteem is suffering due to social media:
Here’s how to help your teen’s self-esteem survive social media:
Social media algorithms are like echo chambers, multiplying the image-focused posts teens are exposed to. In fact, two in three boys report being fed content that promotes stereotypes about masculinity without ever seeking it out. Help your teen periodically reset their algorithm.
Adolescents with strong offline relationships exhibit higher self-esteem. Encourage your teen to hang with their friends in person.
Help your teen understand the interaction between social media and self-image. Give them opportunities to process those feelings and encourage them to pull back or take a break from social media when it makes them feel bad.
Adults aren’t immune to the vicious cycle of social media comparison. But seeing you negatively compare yourself to what you see online sets a harmful example for your child.
This is an instance where we need to fake it till we feel it, folks. Work out your own social media-induced insecurities with a friend or therapist and keep that business away from your impressionable offspring.
Overall, there’s a societal acceptance of body dissatisfaction in teens (especially girls). This creates a dangerous environment because teens’ feelings of inadequacy over what they see online are easily dismissed as typical adolescent angst.
Monitor what your child does online and how it makes them feel, and don’t dismiss your instincts when you suspect something is wrong.
BrightCanary helps you keep an eye on social media’s impact on your teen.
You get:
Exposure to heavily edited images, unrealistic beauty standards, and unhealthy portrayals of gender roles on social media negatively impacts teens’ self-esteem. You can help by keeping an eye on your child’s activity online, resetting their algorithm, teaching them digital literacy, and modeling a healthy relationship with social media.
BrightCanary helps you monitor your child’s activity on social media by monitoring everything they type across all apps. Download today to get started with a free trial.