How to Reset Your Child’s Social Media Algorithm

By Andrea Nelson
October 19, 2023
Tween girls taking selfies together

As a parent, you want your child to surround themselves with good influences. That’s true not only for who they spend time with in real life, but also for the people and ideas they’re exposed to on social media. 

If you or your child are concerned about the content appearing in their feed, one beneficial step you can take is to help them reset their social media algorithm. Here’s how to reset your child’s algorithm on TikTok, Instagram, and other platforms.

What is a social media algorithm?

Social media algorithms are the complex computations that operate behind the scenes of every social media platform to determine what each user sees. 

Everything on your child’s social media feed is likely the result of something they liked, commented on, or shared. (For a more comprehensive explanation, check out our Parent’s Guide to Social Media Algorithms.)

Social media algorithms have a snowball effect. For example, if your child “likes” a cute dog video, they’ll likely see more of that type of content. However, if they search for topics like violence, adult material, or conspiracy theories, their feed can quickly be overwhelmed with negative content.
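To see how this snowball can compound, here's a toy Python simulation. This is not any platform's actual code; the topics and the 1.2 "boost" factor are invented for illustration. The idea: each time the user "likes" a topic, that topic gets a little more weight in future recommendations, and the feed gradually tilts toward it.

```python
import random

def simulate_feed(interacted_topic, rounds=50, seed=0):
    """Toy model of the snowball effect: each interaction with a topic
    slightly increases that topic's weight in future recommendations.
    Topics and the 1.2 multiplier are made up for illustration."""
    random.seed(seed)
    weights = {"dogs": 1.0, "sports": 1.0, "news": 1.0}
    for _ in range(rounds):
        topics, w = zip(*weights.items())
        shown = random.choices(topics, weights=w)[0]  # weighted pick
        if shown == interacted_topic:   # the user "likes" this topic...
            weights[shown] *= 1.2       # ...so the platform shows more of it
    total = sum(weights.values())
    # Return each topic's share of the feed after `rounds` scrolls
    return {t: round(v / total, 2) for t, v in weights.items()}

print(simulate_feed("dogs"))  # "dogs" ends up with the largest share
```

Even though the boost per "like" is small, it compounds: the more a topic is shown, the more chances it gets to be liked, which gets it shown even more. The same loop applies to negative content.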

Therefore, it’s vital that parents actively examine and reset their child’s algorithm when needed, and also teach them the skills to evaluate it for themselves. 

Research clearly demonstrates the potentially negative impacts of social media on tweens and teens. How it affects your child depends a lot on what’s in their feed. And what’s in their feed has everything to do with algorithms. 

Talking to your child about their algorithm

Helping your child reset their algorithm is a wonderful opportunity to teach them digital literacy. Explain why it’s important to think critically about what they see on social media, and how what they do on the site influences the content they’re shown. 

Here are some steps you can take together to clean up their feed: 

Start with their favorite app

Resetting all of your child’s algorithms in one fell swoop can be daunting. Instead, pick the app they use the most and tackle that first. 

Scroll through with them

If your kiddo follows a lot of accounts, you might need to break this step into multiple sessions. Pause on each account they follow and have them consider these questions:

  • Do this person’s posts usually make me feel unhappy or bad about myself? 
  • Does this account make me feel like I need to change who I am? 
  • Do I compare my life, body, or success with others when I view this account? 

If the answer is “yes” to any of these questions, suggest they unfollow the account. If they’re hesitant — for example, if they’re worried unfollowing might cause friend problems — they can instead “hide” or “mute” the account so they don’t see those posts in their feed. 

Encourage interaction with positive accounts 

On the flip side, encourage your child to interact with accounts that make them feel good about themselves and portray positive messages. Liking, commenting, and sharing content that lifts them up will have a ripple effect on the rest of their feed. 

Dig into the settings 

After you’ve gone through their feed, show your child how to examine their settings. This mostly influences sponsored content, but considering the problematic history of advertisers marketing to children on social media, it’s wise to take a look.  

Every social media app has slightly different options for how much control users have over their algorithm. Here's what you should know about resetting the algorithm on popular apps your child might use.

How to reset Instagram algorithm

  • Go to Settings > Ads > Ad topics. You can view a list of all the categories advertisers can use to reach your child. Tap “See less” for ads you don’t want to see. 
  • Go to your child’s profile > tap Following > scroll through the categories to view (and unfollow) the accounts that appear most in your child’s feed.
  • Tap the Explore tab in the bottom navigation bar and encourage your child to search for new content that matches their interests, like cooking, animals, or TV shows.

How to reset TikTok algorithm

  • Go to Settings > Content Preferences > Refresh your For You feed. This is like a factory reset of your child’s TikTok algorithm.
  • Go to Settings > Free up space. Select “Clear” next to Cache. This will remove any saved data that could influence your child’s feed.
  • As your child uses TikTok, point out the “Not interested” feature. Tap and hold a video to pull up this button. Tapping “Not interested” tells TikTok’s algorithm not to show your child videos they don’t like. 

How to reset YouTube algorithm

  • Go to Library > View All. Scroll back through everything your child has watched. You can manually remove any videos that your child doesn’t want associated with their algorithm — just tap the three dots on the right side, then select Remove from watch history.
  • Go to Settings > History & Privacy. Tap “Clear watch history” for a full reset of your child’s YouTube algorithm.

What to watch for

To get the best buy-in and help your child form positive long-term content consumption habits, it’s best to let them take the lead in deciding what accounts and content they want to see. 

At the same time, kids shouldn't have to navigate the internet on their own. Social platforms can easily suggest content and profiles that your child isn't ready to see. A social media monitoring app, such as BrightCanary, can alert you if your child encounters something concerning.

Watch for warning signs as you review your child’s feed. 

If you spot concerning content, it’s time for a longer conversation to assess your child’s safety. You may decide it’s appropriate to insist they unfollow a particular account. And if what you see on your child’s feed makes you concerned for their mental health or worried they may harm themselves or others, consider reaching out to a professional.  

In short 

Algorithms are the force that drives everything your child sees on social media and can quickly cause their feed to be overtaken by negative content. Regularly reviewing your child’s feed with them and teaching them skills to control their algorithm will help keep their feed positive and minimize some of the negative impacts of social media. 

Woman smiling at phone while sitting on couch

Just by existing as a person in 2023, you’ve probably heard of social media algorithms. But what are algorithms? How do social media algorithms work? And why should parents care? 

At BrightCanary, we’re all about giving parents the tools and information they need to take a proactive role in their children’s digital life. So, we’ve created this guide to help you understand what social media algorithms are, how they impact your child, and what you can do about it. 

What is a social media algorithm? 

Social media algorithms are complex sets of rules and calculations used by platforms to prioritize the content that users see in their feeds. Each social network uses different algorithms. The algorithm on TikTok is different from the one on YouTube. 

In short, algorithms dictate what you see when you use social media and in what order. 

Why do social media sites use algorithms?

Back in the Wild Wild West days of social media, you would see all of the posts from everyone you were friends with or followed, presented in chronological order. 

But as more users flocked to social media and the amount of content ballooned, platforms started introducing algorithms to filter through the piles of content and deliver relevant and interesting content to keep their users engaged. The goal is to get users hooked and keep them coming back for more.  

Algorithms are also hugely beneficial for generating advertising revenue for platforms because they help target sponsored content. 

How do algorithms work? 

Each platform uses its own mix of factors, but here are some examples of what influences social media algorithms:

Friends/who you follow 

Most social media sites heavily prioritize showing users content from people they’re connected with on the platform. 

TikTok is unique because it emphasizes showing users new content based on their interests, which means you typically won’t see posts from people you follow on your TikTok feed. 

Your activity on the site

With the exception of TikTok, if you interact frequently with a particular user, you’re more likely to see their content in your feed. 

The algorithms on TikTok, Instagram Reels, and Instagram Explore prioritize showing you new content based on the type of posts and videos you engage with. For example, the more cute cat videos you watch, the more cute cat videos you’ll be shown. 

YouTube looks at the creators you interact with, your watch history, and the type of content you view to determine suggested videos. 

The popularity of a post or video 

The more likes, shares, and comments a post gets, the more likely it is to be shown to other users. This momentum is the snowball effect that causes posts to go viral. 
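To make these factors concrete, here's a simplified scoring sketch in Python. Real platform ranking systems are proprietary and far more complex; the `Post` fields, the weights, and the `feed_score` function below are all invented purely for illustration of how several signals can blend into one ranking.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool   # is the viewer connected to the author?
    past_engagement: float  # 0-1: how often the viewer engages with this topic
    popularity: float       # 0-1: normalized likes/shares/comments

def feed_score(post: Post) -> float:
    """Hypothetical ranking score: a weighted blend of the three factors
    described above. The weights are made up for illustration."""
    return (
        2.0 * post.author_followed    # connections weigh heavily
        + 3.0 * post.past_engagement  # "more of what you like"
        + 1.5 * post.popularity       # viral momentum
    )

posts = [
    Post(author_followed=True,  past_engagement=0.2, popularity=0.1),
    Post(author_followed=False, past_engagement=0.9, popularity=0.8),
]
# Higher score = shown earlier in the feed
ranked = sorted(posts, key=feed_score, reverse=True)
```

Notice that in this sketch, a highly engaging post from a stranger can outrank a friend's post. That mirrors how interest-driven feeds like TikTok's behave, and it's why a child's behavior on the app shapes their feed more than their follow list does.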

Why should parents care about algorithms? 

There are ways social media algorithms can benefit your child, such as creating a personalized experience and helping them discover new things related to their interests. But the drawbacks are also notable — and potentially concerning. 

Since social media algorithms show users more of what they seem to like, your child's feed might quickly become overwhelmed with negative content. Clicking a post out of curiosity or naivety, such as one promoting a conspiracy theory, can inadvertently expose your child to more such content. What may begin as innocent exploration could gradually influence their beliefs.

Experts frequently cite “thinspo” (short for “thinspiration”), a social media topic that aims to promote unhealthy body goals and disordered eating habits, as another algorithmic concern.

Even though most platforms ban content encouraging eating disorders, users often bypass filters using creative hashtags and abbreviations. If your child clicks on a thinspo post, they may continue to be served content that promotes eating disorders.

Social media algorithm tips for parents

Although social media algorithms are something to monitor, the good news is that parents can help minimize the negative impacts on their child. 

Here are some tips:

Keep watch

It’s a good idea to monitor what the algorithm is showing your child so you can spot any concerning trends. Regularly sit down with them to look at their feed together. 

You can also use a parental monitoring service to alert you if your child consumes alarming content. BrightCanary is an app that continuously monitors your child’s social media activity and flags any concerning content, such as photos that promote self-harm or violent videos — so you can step in and talk about it.

Stay in the know

Keep up on concerning social media trends, such as popular conspiracy theories and internet challenges, so you can spot warning signs in your child’s feed. 

Communication is key

Talk to your child about who they follow and how those accounts make them feel. Encourage them to think critically about the content they consume and to disengage if something makes them feel bad. 

In short

Algorithms influence what content your child sees when they use social media. Parents need to be aware of the potentially harmful impacts this can have on their child and take an active role in combating the negative effects. 

Stay in the know about the latest digital parenting news and trends by subscribing to our weekly newsletter.

Teen boy gaming in front of couch

Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • A new report reveals how gambling is quickly becoming the new normal for boys.
  • Experts warn AI in schools may undermine learning and social development.
  • Screen time limits alone aren’t enough anymore, warns the American Academy of Pediatrics.

Digital parenting

🎰 Gambling is becoming alarmingly common among boys: A new report from Common Sense Media is a wake-up call for parents: 36% of boys ages 11–17 gambled in the past year. And we aren’t talking about slots or poker — the report looked at sports betting apps, loot boxes, skin cases, gacha-style rewards inside video games, and social media feeds that normalize betting. Nearly one in four boys has engaged in gaming-related gambling, and most spent real money doing it. Some stats:

  • 12% of boys bet on sports, including fantasy leagues and small peer bets
  • 12% engaged in traditional gambling, with older teens far more likely
  • 6 in 10 boys see gambling ads on YouTube and social media
  • Gambling is highly social: over 80% of boys gamble if their friends do, compared to under 20% if their friends don’t

While many boys describe gambling as “low-stakes” or just part of bonding with friends or family (one-third have gambled with family members), 27% of boys who gamble report negative effects like stress or conflict. The report also highlights a major loophole: while gambling is illegal for minors, in-game gambling mechanics often aren’t regulated the same way, making it easy for kids to spend (or lose) real money.

What parents can do: Start conversations early, recognize that gambling comes in many forms, set clear rules around spending and games, monitor influences (friends, online activity, and games), and watch for warning signs like secrecy or emotional changes.

🤖 The risks of AI in schools may outweigh the benefits: A new study from the Brookings Institution suggests that while AI tools are being rapidly adopted in classrooms, the risks currently outweigh the benefits — especially for kids’ cognitive and social development. Researchers warn of a “doom loop” where students offload thinking to AI, weakening problem-solving and learning skills over time. There are also concerns about kids developing social and emotional habits through chatbots designed to agree with them, making real-world disagreement and collaboration harder.

UNICEF recommends that parents talk to kids early about what AI is, warn against sharing personal information with AI tools, watch for signs of overuse or behavioral changes, and stay involved in how AI is used for school and beyond. Not sure where to start? Check out our free AI safety toolkit for parents (plus a free code for BrightCanary — send it to another parent!).

📵 Why screen time limits alone aren’t enough anymore: The American Academy of Pediatrics says it’s time to rethink how we manage kids’ screen use. New guidance emphasizes that time limits alone don’t address the real issue: digital platforms are intentionally designed to keep kids engaged through autoplay, notifications, and algorithmic feeds.

Screen time doesn’t tell the whole story anymore. Instead of rigid rules, parents are encouraged to focus on how screens are used, what content kids are engaging with, and how digital life affects sleep, learning, and mental health. Think less stopwatch, more strategy. BrightCanary is designed to help parents stay informed about their child’s activity across all the apps they use — so you know not only what apps your kiddo is using, but also what they encounter. Here’s how to start monitoring (without breaking trust).


Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.


Tech talks

Many kids don’t think they’re gambling … even when they absolutely are. Your goal is to help kids recognize risks before habits form. These conversation-starters can help you open the door without judgment:

  1. “Have you ever spent money in a game for a chance to win something random?”
  2. “What kinds of bets do kids your age joke about or make with friends?”
  3. “Do you see sports betting or gambling content on YouTube or social media?”
  4. “How do games make it feel exciting to spend money — and how do they make money back?”
  5. “What would you do if a game or app started making you feel stressed or pressured?”

What’s catching our eye

📱 TikTok gets an American makeover: TikTok officially has US-based owners. So, the app isn’t going anywhere — but the experience won’t stay the same. Experts say changes will likely show up first in moderation and data practices, not features. If your child uses TikTok, use the Family Pairing feature to set guardrails around their use.

🧹 YouTube takes down major AI slop channels: Following a report showing massive growth in low-quality AI-generated content, YouTube appears to have removed several top “AI slop” channels with millions of subscribers.

🪪 Discord rolls out global age verification: Starting next month, Discord will require face scans or ID for full access. Accounts default to a teen-safe experience unless verified as adult — with stricter filters and protections baked in.

Teen boy stressed in front of chalkboard

According to a survey by the Pew Research Center, 64% of teens report using generative AI chatbots like ChatGPT for everything from homework help to companionship. But a startling concern is emerging among experts. Early research suggests that overreliance on generative AI could lead to cognitive atrophy and the loss of brain plasticity. Or, as the kids say: brain rot.

As a parent who is determined to teach my kids how to use AI responsibly, I’ve been watching this issue closely. Here’s what to know about how overusing AI impacts the brain and how to protect your child’s cognitive abilities in the face of this new technology.    

What are the cognitive dangers to kids from overreliance on AI?

Generative AI is in its infancy, and so is the research on this topic. But cognitive offloading is likely to blame for AI’s impact on kids’ cognitive health. 

Cognitive offloading happens when people use external tools or resources to reduce mental effort. On the one hand, this process can help people accomplish tasks faster. On the other hand, all of that offloading can be harmful for developing brains.

1. Using AI hinders skills such as writing and reasoning

Experts suggest cognitive offloading erodes critical thinking and reasoning skills.

When AI always provides the answers, kids miss out on the opportunity to develop foundational life skills like problem-solving and deep thinking. 

For example, learning to write is deeply intertwined with learning to think. However, offloading writing tasks degrades students’ ability to organize and express their thoughts.

2. Overuse of AI weakens the way the brain absorbs information

When kids offload tasks to AI without doing any leg work, their ability to perform independent research and analyze materials decreases. Students end up with only a superficial understanding of information — they can state the what, but don’t grasp the why or how.

3. Children and teens are the most vulnerable

Research has shown that younger users demonstrate a higher dependence on AI tools when compared to older users, and that the corresponding decline in their critical thinking is also greater.

The brain is particularly malleable during childhood and adolescence, making kids and teens especially vulnerable to the impacts of AI.

Because younger children are more likely to anthropomorphize, or assign human properties to inanimate objects, experts suggest that even simple praise from an AI chatbot can greatly change their behavior.  

How can I help my child use AI in a healthy way? 

The sooner you start teaching your child to use AI smartly, the more you can buffer its effect on their brain. 

1. Help your child build AI literacy 

To help your child gain AI literacy, teach them:

  • How AI tools work. Here’s a great primer for kids.
  • AI can be wrong. From hallucinations to faulty data to fraud, AI doesn’t always get the facts straight. 
  • AI contains bias. AI is trained on data from humans, and humans are inherently biased. Therefore, so is AI. 
  • How overuse of AI can impact their brain. Ask open-ended questions like, "AI can give quick answers, but what do you think happens to our brains when we don't have to work hard to solve things?"

2. Teach your child to use AI as a tool, not a crutch

AI isn’t inherently harmful. The key is using it to support thinking, not replace it. Encourage your child to:

  • Generate their own ideas. 
  • Limit their use of AI, and explain that moderation is key.
  • Use AI as a starting point for research, but independently verify facts.
  • Write first drafts themselves to gain the cognitive benefits of organizing and expressing their thoughts.
  • Focus on using AI to improve productivity rather than offloading thinking.
  • Think critically about the material produced by AI.

3. Model a balanced approach to AI

  • Examine your own use (or overuse) of AI.
  • Openly question the information you encounter from AI.
  • Maintain a curiosity mindset and let your kids see you engaging in activities and pursuits without the use of AI.

How BrightCanary helps you monitor your child’s AI use

BrightCanary helps you monitor how your child engages with AI by scanning everything they type on their iPhone or iPad. Use it to: 

  • Monitor their activity across every app. 
  • Access summaries of their activity.
  • Read full transcripts when you need more details.
  • Get real-time alerts if your child types anything concerning on an AI platform (or any other app). 

In short 

Overreliance on generative AI may lead to a decline in cognitive skills such as critical thinking, reasoning, and the ability to analyze and understand information. Because their brains are especially malleable, children and teens are particularly vulnerable to the impacts of AI on the brain. It’s important to teach your child AI literacy, show them how to use the tool responsibly, and monitor how they use it. 

BrightCanary helps you monitor your child’s activity on the apps they use the most, including all AI platforms. Download today to get started for free.

Children looking at tablet

It will come as no surprise to parents that YouTube is all the rage with kids. In fact, recent research suggests that nine out of 10 kids use YouTube, and kids under 12 favor YouTube over TikTok. With all of YouTube’s popularity, how can you make the platform safer for your child? Read on to learn how to set parental controls on YouTube. 

Why parental controls matter

As the name implies, YouTube is a platform for user-generated content. While this creates an environment ripe for creativity, it also means there’s a little bit of everything, including videos featuring violent and sexual content, profanity, and hate speech. 

Because YouTube makes it easy for kids to watch multiple videos in a row, there’s always the chance your child may accidentally land on inappropriate content. In addition, the comments sections on YouTube videos are often unmoderated and can be full of toxic messages and cyberbullying. 

Due to the risks, it’s important that parents monitor their child’s YouTube usage, discuss the risks with them, and use parental controls to minimize the chance they’re exposed to harmful content. 

How to set parental controls on YouTube

YouTube offers a variety of options for families looking to make their child’s viewing experience as safe as possible. Here are some important steps parents can take: 

Create a supervised Google account for YouTube

A supervised account will allow you to manage your child’s YouTube experience on the app, website, smart TVs, and gaming consoles. 

Select a content setting

There are three content setting options to choose from: 

  • Explore: Content rated for viewers 9+. This category excludes live streams, with the exception of Premieres. 
  • Explore more: For viewers 13+. This setting includes a larger set of videos, including live streams. 
  • Most of YouTube: For viewers 13–17. This option has almost everything on YouTube, but excludes content marked as 18+ by channels, YouTube’s systems, or reviewers. 

Set parental controls

Along with content settings, here are some additional YouTube parental controls to explore: 

  • Block specific channels: When monitoring your child's YouTube usage, if you encounter content you prefer they avoid, you have the option to block that channel. 
  • Review your child’s watch history: When you can't supervise their viewing at the moment, you can check what your child has been watching.  
  • Control video suggestions: If you don’t like the videos YouTube’s algorithm is suggesting for your child, try these steps to reset their YouTube algorithm:
    • Clear history
    • Pause watch history 
    • Pause search history
  • Disable Autoplay: This setting prevents YouTube from automatically playing the next suggested video.
  • Set time limits: If you need a little help enforcing screen time limits, this option shuts down the app when your child reaches their max. 

YouTube has also announced parental controls for YouTube Shorts, the platform’s short-form video experience similar to TikTok. Parents will soon be able to set time limits on Shorts, as well as custom reminders for bedtime and screen time breaks. As of this writing, these features aren’t yet available.

For step-by-step instructions for setting up parental controls, refer to this comprehensive guide by YouTube. 

Where parental controls on YouTube fall short

While YouTube offers an impressive array of parental control settings, you have to manually review your child’s content and watch history in order to catch any concerning content. 

BrightCanary is a parental monitoring app that fills in the gaps. Here’s how BrightCanary helps you supervise your child’s YouTube activity:

  • The app reports on what your child is watching and searching for, so you don’t have to watch each video on your own.
  • Advanced technology automatically scans your child’s video activity and flags anything concerning, so you’ll know when you need to step in.
  • You can either view all of their YouTube activity, or just review any videos flagged as concerning.
  • You can also monitor Google activity, texts, social media, and more — more coverage than other parental control apps on Apple devices.

YouTube vs. YouTube Kids

For parents looking for additional peace of mind, YouTube Kids provides curated content designed for children from preschool through age 12. 

For households with multiple children, parents can set up an individual profile for each child, so kids can log in and watch videos geared toward their age. YouTube Kids also allows parents to set a timer of up to one hour, limiting how long a child can use the app. 

Parents should be aware that switching to YouTube Kids isn’t a perfect solution. There’s still a chance that inappropriate content may slip through the filters. 

In fact, a study by Common Sense Media found that 27% of videos watched by kids 8 and under are intended for older audiences. And for families concerned about ads, YouTube Kids still has plenty of those — targeted specifically toward younger children. Keeping an eye on what your child is watching and talking to them about inappropriate videos and sponsored content is still a good idea, even with YouTube Kids. Fortunately, you can also monitor YouTube Kids with BrightCanary.

It’s also worth noting that kids under 12 who have a special interest they want to pursue may find YouTube Kids limiting. A child looking to watch Minecraft instructional videos or do a deep dive into space exploration, for example, can find a lot more options on standard YouTube — plenty of which are perfectly appropriate for kids, even if they aren’t specifically geared toward them. It’s cases like this where parental controls and active monitoring are especially useful. 

The takeaway

YouTube is a popular video platform with plenty to offer kids. It’s not without risks, though. Parents should monitor their child’s use and take advantage of parental controls to ensure a safe, appropriate viewing experience. 

Young girl watching YouTube videos on iPad

Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • More than 20% of your kid’s YouTube feed is now AI slop.
  • Teens prefer chatting with AI bots that talk like their best friend — and that’s a red flag.
  • Does your kid say they need to use their cell phone during school for homework? Research says … they’re more likely to be on TikTok, Snapchat, or Instagram.

Digital parenting

🤖 More than 20% of YouTube is now AI slop: If your child’s YouTube feed feels … weird lately, you’re not imagining it. A new report from video editing firm Kapwing found that 21% of the first 500 YouTube Shorts shown to a brand-new account were AI-generated, and 33% qualified as “brain rot” content: hyper-stimulating, low-effort videos designed to farm views rather than inform or entertain. And it’s not just Shorts: The Guardian reports that nearly 1 in 10 of the fastest-growing YouTube channels globally now post only AI-generated videos.

Algorithms don’t care if content is junk — they care if it keeps kids watching. This is a good moment to talk with your child about how algorithmic recommendation systems work, why “popular” doesn’t always mean “good,” and how to recognize content that’s meant to hook, not help.

🤝 Teens prefer AI chatbots that feel like “best friends” — and that’s a red flag: New research raises concerns about AI chatbots designed to sound like emotionally supportive humans. Researchers found that most adolescents prefer AI that communicates like a “best friend,” rather than systems that make it clear they’re not human. Teens who preferred their AI BFF reported higher stress and anxiety, and lower-quality relationships with family and peers — indicating that they may be more emotionally vulnerable when it comes to befriending AI. 

The authors argue that clear boundaries, repeated reminders that AI isn’t human, and stronger AI literacy should be treated as core safety features, not optional add-ons. If your child uses AI chatbots like ChatGPT or Polybuzz, reinforce that they shouldn’t replace real relationships or emotional support.

📱 Kids are spending over an hour of the school day on their phones: According to new research published in JAMA, American teens ages 13–18 spend an average of 70 minutes of the school day on their phones — mostly social media apps like TikTok, Instagram, and Snapchat. They also spent an average of nearly 15 minutes each day on gaming apps and almost 15 minutes on video apps such as YouTube, all during school hours. 

If your school district isn’t one of the growing number of schools banning phones, experts recommend keeping phones out of reach during class time, such as in lockers or pouches. At the very least, have your child turn off their phone when they get to school or use Apple Screen Time to set Do Not Disturb limits. The goal isn’t punishment. You’re helping kids protect their attention while they’re still learning how. 




Tech talks

Use these conversation-starters to spark meaningful discussions this week about attention and connection:

  1. “What kinds of videos show up on your feed that feel like a waste of time?”
  2. “How do you decide when a game or app is helping you relax, versus stressing you out?”
  3. “Do you ever feel pressure to keep scrolling even when you’re bored?”
  4. “What’s something you enjoy doing that screens sometimes get in the way of?”
  5. “If we changed one screen habit as a family, what should it be?”

What’s catching our eye

✍️ TikTok signed a deal creating a new U.S.-based joint venture backed by Oracle and other American investors. It’s still unclear whether U.S. users will need to migrate to a new app, but ByteDance says it won’t control U.S. user data or the algorithm.

📵 We’re one month into Australia’s social media ban for kids under 16. Some teens report feeling “free” and more present, while others quickly found workarounds using fake birthdays or switched to messaging apps like WhatsApp and Discord, the BBC reports.

💬 Character.AI and Google have agreed to settle lawsuits brought by families of teens who harmed themselves after interacting with AI chatbots. Character.AI has since banned users under 18 from open-ended chats.

📍 A Texas father used phone-based parental controls to track and help rescue his 15-year-old daughter after she was kidnapped. It’s a sobering reminder that safety tools matter — particularly in situations where time is of the essence.


Teen crime in the U.S. is historically low, but that statistic masks a troubling trend parents can’t afford to ignore. In recent years, there’s been a disturbing uptick in violence linked to social media, from fight compilations and “stomp outs” to gang activity and assaults coordinated online.

This trend raises a critical question: does social media promote violence among teens? In this article, we’ll break down how social media and violence interact, what the research says about teen behavior, and steps parents can take to reduce their child’s exposure and risk.

Is teen violence rising because of social media? 

Teen violence is increasingly visible on social media. After a pandemic-era spike, youth violence overall has been on a downward trajectory. But recently, a number of cities have seen an increase in violent crimes involving youth, with police citing social media as a frequent contributor to incidents. Here are some of the ways this violence shows up online:

  1. Fight compilations. In this disturbing trend popular on YouTube, snippets of fights between everyday people, usually captured on phones, are stitched together into compilations. 
  2. Homicide. Some homicides are captured and posted on social media. Take the case of 16-year-old Preston Lord. A group known as the Gilbert Goons, who frequently recorded and posted their attacks on fellow teens, fatally beat Lord and bragged about the attack on social media.
  3. Gang activity. Street gangs have taken to social media to recruit new members and issue threats to rival gangs. 
  4. “Stomp outs.” In street slang, a “stomp out” refers to an attack in which a victim is repeatedly kicked and stomped, often by multiple assailants. These attacks can serve as gang initiations or strikes against rivals, and they increasingly end up online.

Does social media promote violence among teens? 

Numerous studies have found a link between witnessing violent activity on social media and real-life violence among teens. According to a 2024 report by the Youth Endowment Fund (YEF), nearly two-thirds of teens who reported perpetrating a violent incident in the preceding 12 months said that social media played a role. 

This correlation is likely due to several factors:

  1. Online arguments leading to in-person violence. Digital spats can quickly spill over into IRL conflicts, made worse by the fact that people are often emboldened to say things online that they never would face-to-face. 
  2. Exposure to violence on social media drives fear. In the YEF survey, only one in 20 teenagers said they carried a weapon, but one in three saw weapons on social media. This drives fear among teens and leads to some feeling the need to carry a weapon themselves. 
  3. Normalization of violence. Meta-analyses of the unhealthy effects of media violence show that youth who view violence online on a regular basis are more likely to display acceptance of and desensitization toward violent behavior. 
  4. The pursuit of likes. In an interview with PBS News, Commander Gabe Lopez, head of the Phoenix Police Department's Violent Crimes Bureau, shared his fear that young people are committing violent crimes “so they can post it on their social media feed, so they can get street cred, or so that they can get likes.”

How social media algorithms push violent content to teens

Social media sites use complex sets of rules and calculations, known as algorithms, to prioritize which content users see in their feeds. Here’s what you need to know about social media algorithms and violent content shown to teens: 

  • Even when kids don’t seek out violent content, they’re shown it anyway. According to the YEF study, 70% of teens are exposed to real-life violence on social media, and a quarter of that exposure comes from content pushed to them by the platforms’ algorithms. 
  • The effect is often a snowball. If a teen pauses to watch a violent video in their feed, perhaps out of curiosity, they are more likely to be shown additional violent content. If kids actively seek violent content, the impact is even greater. 
  • Teens are most likely to see violence on TikTok. 30% of all 13- to 17-year-olds and 44% of TikTok users report exposure to violence on the platform, according to the YEF study.

How parents can protect teens from violence on social media

Here are some actions you can take today to combat the negative effects of social media and violence on your child. 

  1. Reset their algorithms. Periodically help your child reset their social media algorithms to clear out harmful content, such as violent videos. 
  2. Help them understand the bigger picture. Make it clear to them that the majority of teens don’t engage in violence and explain how social media can skew perception. 
  3. Monitor their social media use. Use digital check-ins and a parental monitoring app like BrightCanary to keep an eye on your child’s social media.

Social media and violence: the final word

Despite teen violence decreasing overall in recent years, there has been a spike in violent incidents where social media played a role. In addition, exposure to violent content on social media can lead to real-world violence among teens. Parents should help their children understand the ways that social media promotes violence, periodically reset their algorithms, and monitor their online activity for violent content. 

BrightCanary helps you monitor your child’s activity on the apps they use the most and sends you alerts when there’s an issue, including if they seek out or engage with violent content. Download today to get started for free.


Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • Instagram will let users choose what shows up in Reels, which is as good a time as any to talk to your child about how algorithms work (we break it down for you).
  • New research links getting a smartphone before 13 to worse mental health outcomes.
  • Why experts are worried about AI chatbots and deepfake “thinspiration.”

Digital parenting

👉 Instagram will let your child pick what shows up in Reels: Instagram is doing something pretty unusual for a social media platform: explaining what’s under the hood. With a new feature called “Your Algorithm,” users can now see a summary of their recent interests and choose topics they want to see more or less of, like dialing up “jiu jitsu” and dialing down “AI cat videos.” 

For parents, this product update is also a conversation-starter with your teen. Social media algorithms aren’t neutral: they learn from behavior, reward attention, and quietly shape what kids see day after day. This feature offers a rare chance to pause mid-scroll and ask:

  • Why do you think Instagram thinks this is your interest?
  • How do videos like this make you feel after watching them for a while?
  • What would you want to see more of (or less of) if you had the choice?

Our take: Tools like this don’t “fix” social media, but they do help kids understand that feeds are designed to hook you based on your interests. The more teens understand how algorithms work, the better equipped they are to use platforms intentionally instead of getting pulled along for the ride. For more on this, browse our parent’s guide to social media algorithms, and learn how to reset your child’s algorithm on popular platforms.

🎁 Thinking about a smartphone for the holidays? Read this first: If a phone is on your child’s holiday wishlist, new research suggests it’s worth waiting. A large study published in Pediatrics found that kids who got their first smartphone before age 13 had significantly worse health outcomes than peers without phones:

  • 31% higher risk of depression
  • 40% higher risk of obesity
  • 62% higher risk of not getting enough sleep

Additionally, a new study from the American Psychological Association directly ties short-form video content to significantly diminished mental health and shorter attention spans. 

The median age for getting a phone in the U.S. is now 11, which means many kids are entering middle school with a powerful device and very few guardrails. However, the takeaway from experts isn’t panic: it’s constraints. Use parental controls like Apple Screen Time to set restrictions on device use, and use a monitoring app like BrightCanary to stay informed about what your child encounters online. 

One simple, high-impact step? Keep phones out of bedrooms overnight. It’s not a cure-all, but it’s one of the easiest ways to protect sleep and manage device boundaries, even if your child already has a phone.


Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.


Tech talks

A few questions to help kids think critically about feeds, phones, and habits:

  1. “If you could redesign your social media feed, what would you want more of, and what would you get rid of?”
  2. “Do you think apps show you what you like, or what keeps you watching the longest?”
  3. “How do you usually feel after scrolling for 10 minutes? What about after an hour?”
  4. “What’s one app that helps you relax, and one that stresses you out?”
  5. “What rules do you think adults should follow with their phones, too?”

What’s catching our eye

📰 We were included in Wirecutter’s roundup of best parental control apps! Check us out under "Other parental control apps worth considering." 

🚫 “It was kind of scary, because social media is so present in my life, and to think it could be taken away like that so suddenly felt weird.” Australia’s social media ban kicked in last week, effectively barring teens under age 16 from using Instagram, YouTube, TikTok, and other major platforms. Here’s how teens are responding.

🤖 Researchers warn that popular AI tools are offering dieting advice, tips for hiding disordered eating, and even generating hyper-personalized “thinspiration” images. Experts say this content can be especially dangerous for vulnerable teens — and much harder to spot than traditional social media posts.


For many families, the iPad feels like the “safe” device — the thing kids use before they’re ready for a smartphone. But iPads come with many of the same risks: exposure to inappropriate content, contact from strangers on apps like Roblox and YouTube, and unhealthy screen habits.

That’s why it’s important to take proper precautions, like setting up iPad parental controls and monitoring your child’s use. This guide explains how to put parental controls on your child’s iPad step-by-step, as well as how to monitor their activity in order to keep them safe.

Do I need to use iPad parental controls? 

Whether you have an “iPad kid” or a casual user on your hands, it’s vital that you use iPad parental controls. While kids get real benefits from iPads, they also face real risks: 

  1. Exposure to inappropriate content. Unrestricted access to iPads can expose kids to explicit images, adult content, and violent videos.
  2. Stranger danger. Online spaces have surpassed offline ones as the environment where kids are most likely to be targeted by predators. Grooming can (and does) happen on apps your child probably has on their iPad, like Roblox and YouTube.
  3. YouTube. Picture a young child using an iPad, and you likely imagine them watching endless streams of YouTube videos. In addition to inappropriate content and contact with strangers, YouTube can expose kids to cyberbullying, dangerous algorithms, content that promotes self-harm and disordered eating, and more.
  4. Excessive screen time. Too much screen time can reduce physical activity, lead to problems in social-emotional development, and contribute to certain behavior problems. 

How to put parental controls on the iPad 

To put parental controls on your child’s iPad, you must first set up Family Sharing. Here’s how to do it:

  1. Go to Settings.
  2. Tap your name. 
  3. Select Family Sharing.
  4. Follow the prompts to set up your Family Sharing group.
  5. Add your child as a family member.

After you’ve set up Family Sharing, here are the parental controls we recommend: 

  • Screen Time. Screen Time allows you to view how much time your child spends on particular apps and websites and control the amount of time they spend on each screen activity.
  • Content Restrictions. Set filters and age restrictions for music and podcasts, movies and TV shows, books, apps, web content, and games. You can also filter explicit language on Siri. 
  • App Limits. Set daily time limits for individual apps or entire categories, such as games or social media. 
  • Downtime. Use this to select the days and times when your child is blocked from using their device. 
  • Communication Limits. Limit who can contact your child through iMessage, restrict who they can communicate with during Downtime, and prevent them from adding new contacts without your approval. 
  • Restrict iTunes & App Store Purchases. Not only will this parental control prevent any surprise bills, but it also means your child can’t download any apps (even free ones) without your permission. 

How BrightCanary can help with iPad monitoring

iPad parental controls offer a lot of protection, but monitoring what your child does on their iPad is equally vital. BrightCanary can help you with iPad monitoring. 

With BrightCanary, you get:

  • Advanced monitoring of everything your child types on their iPad across all apps and websites. 
  • Real-time alerts when your child types anything concerning. 
  • AI-powered insights and summaries.
  • Full transcripts of your child's activity when needed.

Plus, when your child is ready for an Apple Watch or iPhone, BrightCanary can help you monitor those, too.

In short

Kids face various dangers when using iPads, including exposure to inappropriate content and predators. It’s important to use iPad parental controls to help keep your child safe on their device. 

iPad monitoring is another important piece of the safety puzzle, and BrightCanary can help. BrightCanary monitors everything your child types on their iPad, so you can easily keep track of their activity across all apps. Download today and get started for free.


Truthfully, when I downloaded Sora to test it for this article, I was already skeptical of the app. Everything I’d read made me apprehensive about this technology in the hands of children. 

In fact, Common Sense, a media watchdog that I look to as a parent, categorized the risk to kids from using Sora as unacceptable. What I discovered in my own testing did little to quash my concerns.

Harmful content, startlingly realistic fake Sora videos, and the ease with which your child’s likeness can be used by others to make videos are just a few of the dangers. This guide explains what Sora is, how AI-generated Sora videos work, why parents should be concerned, and what precautions you can take if your child uses the app.

What is Sora? 

Sora is the latest offering from OpenAI, the creators of ChatGPT. Here’s what you need to know about how Sora works:

  • From a text prompt, users can create and post AI-generated videos that range from hyper-realistic to absurd.
  • The app functions like TikTok, except the videos are entirely fake. 
  • Users can follow and friend others on the app. 
  • Users can upload a Cameo (a short snippet of video and audio) of themselves to use in videos. 
  • Depending on the permissions a user sets, other people can use their Cameo to create videos.  
  • Videos created on Sora can be posted to other platforms. They’re watermarked, but the watermark can easily be removed with third-party software.   

Should parents be concerned about OpenAI’s Sora? 

Yes. Even though Sora has made safety improvements since its original launch, it’s still a dangerous place for kids. Here are the biggest risks: 

1. Blurring of the truth

Many Sora videos are extremely realistic, making it hard for kids to distinguish truth from fiction. That’s especially true when they’re shared on other, more trusted platforms. 

I was able to quickly generate realistic news clips announcing everything from hurricanes flattening Hawaii to the return of the military draft.

2. Your child’s likeness can be misused 

Sora’s Cameo feature lets users insert their face and voice into AI-generated videos. Sora has some safeguards to protect how your child’s likeness can be used, such as permission levels for who can use their Cameo, but these protections are easily bypassed.

That leaves your child with little control over what videos are made of them, and videos can be shared anywhere online.

3. Harmful content is easy to access

Content depicting violence, racism, disordered eating, and self-harm is plentiful on Sora. 

The content restrictions were stronger than I expected, but with clever phrasing, they can be bypassed. For example, when I typed the prompt “teen girl measuring herself,” it was flagged. But when I swapped “teen girl” for “young woman,” I got a video with body checking written all over it. 

Does Sora have parental controls? 

To their credit, OpenAI recently launched teen accounts, which include reduced exposure to sensitive content and stricter permissions for cameos. You can connect your ChatGPT account to your child’s to set parental controls. 

It’s a step in the right direction but has major gaps. Here’s what you can and can’t do with Sora’s parental controls: 

Strengths of Sora’s parental controls 

Parents can:

  1. Opt your child out of a personalized feed. This means that Sora doesn’t draw from your child’s ChatGPT records or Sora history to target videos, which helps prevent them from getting stuck in a dangerous algorithmic loop.
  2. Block your child from sending and receiving direct messages. (Adult accounts are automatically prevented from sending DMs to teen accounts.) 
  3. Turn off continuous scrolling in your child’s feed. This reduces endless content exposure.

These settings help, but they’re far from comprehensive.

Weaknesses of Sora’s parental controls 

  1. Insufficient age verification. Age verification on Sora is entirely self-reported. Parental controls mean nothing if a child can lie about their age and create an adult account. It also means that adults can lie about their age in order to message teens.
  2. Parents can’t turn off their child’s feed. You can limit recommendations, but you can’t disable browsing or video generation.
  3. No ability to see your child’s activity on Sora. Parents cannot see what their child creates or watches on Sora. And unlike ChatGPT, if your child watches or creates something concerning, you won’t get an alert.  
  4. Parents can’t monitor who has their child’s Cameo. If your child uploads their likeness to Sora, you have no insight into who is using it or how. 

How can I help my child use Sora safely?

My honest answer, as a parent and someone who writes about parenting in the digital era, is that there’s no safe way for a child to use OpenAI’s Sora. But your risk tolerance may be different. 

If you choose to let your child use Sora, here are steps you can take to help them do so more safely. 

  1. Utilize parental controls. Sora’s parental controls are insufficient, but better than nothing. Use them to their fullest. 
  2. Talk to your child about the risks. Educate them on how easy it is to fall for fake videos and the dangers of letting others use their likeness. 
  3. Use a third-party monitoring app. BrightCanary monitors everything your child types across all platforms, including Sora, and alerts you to any concerns. 

In short

Sora is an AI-powered video generation app and social media platform from OpenAI. Despite new protections, it remains unsafe for children. Harmful content, distortion of the truth, and a lack of control over how their likeness appears in videos are some of the reasons Sora is dangerous for kids. 

If you let your child use Sora, you should set parental controls, talk to them about the dangers, and use a third-party monitoring app like BrightCanary to stay informed about what they’re typing online. Download today to get started for free.

©2024 Tacita, Inc. All Rights Reserved.