How to Reset Your Child’s Social Media Algorithm

By Andrea Nelson
October 19, 2023

As a parent, you want your child to surround themselves with good influences. That’s true not only for who they spend time with in real life, but also for the people and ideas they’re exposed to on social media. 

If you or your child are concerned about the content appearing in their feed, one beneficial step you can take is to help them reset their social media algorithm. Here’s how to reset your child’s algorithm on TikTok, Instagram, and other platforms.

What is a social media algorithm?

Social media algorithms are the complex computations that operate behind the scenes of every social media platform to determine what each user sees. 

Everything on your child’s social media feed is likely the result of something they liked, commented on, or shared. (For a more comprehensive explanation, check out our Parent’s Guide to Social Media Algorithms.)

Social media algorithms have a snowball effect. For example, if your child “likes” a cute dog video, they’ll likely see more of that type of content. However, if they search for topics like violence, adult material, or conspiracy theories, their feed can quickly be overwhelmed with negative content.

Therefore, it’s vital that parents actively examine and reset their child’s algorithm when needed, and also teach them the skills to evaluate it for themselves. 

Research clearly demonstrates the potentially negative impacts of social media on tweens and teens. How it affects your child depends a lot on what’s in their feed. And what’s in their feed has everything to do with algorithms. 
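The snowball effect described above is essentially a feedback loop. The toy Python sketch below is purely illustrative (no platform publishes its real ranking code, and the function names and weights here are invented), but it shows how just a few likes on one topic can quickly tilt a feed:

```python
# Illustrative sketch only -- not any platform's actual code.
# Each interaction with a topic raises that topic's weight, so
# similar content becomes more likely to appear next time.

def update_interests(interests, topic, boost=1.0):
    """Increase a topic's weight after a like, comment, or share."""
    interests[topic] = interests.get(topic, 0.0) + boost
    return interests

def feed_share(interests, topic):
    """Fraction of the feed devoted to a topic (weight / total weight)."""
    total = sum(interests.values()) or 1.0
    return interests.get(topic, 0.0) / total

interests = {"dogs": 1.0, "sports": 1.0, "music": 1.0}
for _ in range(5):                      # five "likes" on dog videos
    update_interests(interests, "dogs")

# Dogs went from a third of the feed to three-quarters of it.
print(round(feed_share(interests, "dogs"), 2))  # → 0.75
```

The same loop runs in reverse for negative content: a handful of clicks on a harmful topic raises its weight just as quickly.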

Talking to your child about their algorithm

Helping your child reset their algorithm is a wonderful opportunity to teach them digital literacy. Explain to them why it’s important to think critically about what they see on social media, and how what they do on the site influences the content they’re shown. 

Here are some steps you can take together to clean up their feed: 

Start with their favorite app

Resetting all of your child’s algorithms in one fell swoop can be daunting. Instead, pick the app they use the most and tackle that first. 

Scroll through with them

If your kiddo follows a lot of accounts, you might need to break this step into multiple sessions. Pause on each account they follow and have them consider these questions:

  • Do this person’s posts usually make me feel unhappy or bad about myself? 
  • Does this account make me feel like I need to change who I am? 
  • Do I compare my life, body, or success with others when I view this account? 

If the answer is “yes” to any of these questions, suggest they unfollow the account. If they’re hesitant — for example, if they’re worried unfollowing might cause friend problems — they can instead “hide” or “mute” the account so they don’t see those posts in their feed. 

Encourage interaction with positive accounts 

On the flip side, encourage your child to interact with accounts that make them feel good about themselves and portray positive messages. Liking, commenting, and sharing content that lifts them up will have a ripple effect on the rest of their feed. 

Dig into the settings 

After you’ve gone through their feed, show your child how to examine their settings. These settings mostly influence sponsored content, but considering the problematic history of advertisers marketing to children on social media, it’s wise to take a look.  

Every social media app has slightly different options for how much control users have over their algorithm. Here's what you should know about resetting the algorithm on popular apps your child might use.

How to reset Instagram algorithm

  • Go to Settings > Ads > Ad topics. You can view a list of all the categories advertisers can use to reach your child. Tap “See less” for ads you don’t want to see. 
  • Go to your child’s profile > tap Following > scroll through the categories to view (and unfollow) the accounts that appear most in your child’s feed.
  • Tap the Explore tab in the bottom navigation bar and encourage your child to search for new content that matches their interests, like cooking, animals, or TV shows.

How to reset TikTok algorithm

  • Go to Settings > Content Preferences > Refresh your For You feed. This is like a factory reset of your child’s TikTok algorithm.
  • Go to Settings > Free up space. Select “Clear” next to Cache. This will remove any saved data that could influence your child’s feed.
  • As your child uses TikTok, point out the “Not Interested” feature. Tap and hold a video to pull up this button. Tapping “Not interested” tells TikTok’s algorithm not to show your child videos they don’t like. 

How to reset YouTube algorithm

  • Go to Library > View All. Scroll back through everything your child has watched. You can manually remove any videos that your child doesn’t want associated with their algorithm — just tap the three dots on the right side, then select Remove from watch history.
  • Go to Settings > History & Privacy. Tap “Clear watch history” for a full reset of your child’s YouTube algorithm.

What to watch for

To get the best buy-in and help your child form positive long-term content consumption habits, it’s best to let them take the lead in deciding what accounts and content they want to see. 

At the same time, kids shouldn't have to navigate the internet on their own. Social platforms can easily suggest content and profiles that your child isn't ready to see. A social media monitoring app, such as BrightCanary, can alert you if your child encounters something concerning.

Here are a few warning signs to watch for as you review your child’s feed: content that promotes self-harm, disordered eating, or violence; conspiracy theories; and messages or follows from adults your child doesn’t know. 

If you spot any of this content, it’s time for a longer conversation to assess your child’s safety. You may decide it’s appropriate to insist they unfollow a particular account. And if what you see on your child’s feed makes you concerned for their mental health or worried they may harm themselves or others, consider reaching out to a professional.  

In short 

Algorithms are the force that drives everything your child sees on social media and can quickly cause their feed to be overtaken by negative content. Regularly reviewing your child’s feed with them and teaching them skills to control their algorithm will help keep their feed positive and minimize some of the negative impacts of social media. 


Just by existing as a person in 2023, you’ve probably heard of social media algorithms. But what are algorithms? How do social media algorithms work? And why should parents care? 

At BrightCanary, we’re all about giving parents the tools and information they need to take a proactive role in their children’s digital life. So, we’ve created this guide to help you understand what social media algorithms are, how they impact your child, and what you can do about it. 

What is a social media algorithm? 

Social media algorithms are complex sets of rules and calculations used by platforms to prioritize the content that users see in their feeds. Each social network uses different algorithms. The algorithm on TikTok is different from the one on YouTube. 

In short, algorithms dictate what you see when you use social media and in what order. 

Why do social media sites use algorithms?

Back in the Wild Wild West days of social media, you would see all of the posts from everyone you were friends with or followed, presented in chronological order. 

But as more users flocked to social media and the amount of content ballooned, platforms started introducing algorithms to filter through the piles of content and deliver relevant and interesting content to keep their users engaged. The goal is to get users hooked and keep them coming back for more.  

Algorithms are also hugely beneficial for generating advertising revenue for platforms because they help target sponsored content. 

How do algorithms work? 

Each platform uses its own mix of factors, but here are some examples of what influences social media algorithms:

Friends/who you follow 

Most social media sites heavily prioritize showing users content from people they’re connected with on the platform. 

TikTok is unique because it emphasizes showing users new content based on their interests, which means you typically won’t see posts from people you follow on your TikTok feed. 

Your activity on the site

With the exception of TikTok, if you interact frequently with a particular user, you’re more likely to see their content in your feed. 

The algorithms on TikTok, Instagram Reels, and Instagram Explore prioritize showing you new content based on the type of posts and videos you engage with. For example, the more cute cat videos you watch, the more cute cat videos you’ll be shown. 

YouTube looks at the creators you interact with, your watch history, and the type of content you view to determine suggested videos. 

The popularity of a post or video 

The more likes, shares, and comments a post gets, the more likely it is to be shown to other users. This momentum is the snowball effect that causes posts to go viral. 
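Taken together, these factors can be pictured as inputs to a scoring function. The Python sketch below is a simplified illustration with invented weights, not how TikTok, Instagram, or YouTube actually rank posts; real platforms tune thousands of signals.

```python
# Toy ranking function -- an illustrative sketch only. It combines
# the three factors above: connection to the poster, past engagement
# with the topic, and overall popularity of the post.

def score_post(post, following, topic_engagement):
    connection = 2.0 if post["author"] in following else 0.0
    interest = topic_engagement.get(post["topic"], 0.0)
    popularity = post["likes"] + 2 * post["shares"] + post["comments"]
    # The weights here are made up for illustration.
    return 3.0 * connection + 2.0 * interest + 0.001 * popularity

posts = [
    {"author": "friend1", "topic": "cats", "likes": 10, "shares": 1, "comments": 2},
    {"author": "stranger", "topic": "cats", "likes": 50000, "shares": 9000, "comments": 4000},
]
ranked = sorted(posts, key=lambda p: score_post(p, {"friend1"}, {"cats": 1.5}), reverse=True)

# A sufficiently viral post can outrank a friend's post.
print(ranked[0]["author"])  # → stranger
```

This is why a stranger’s viral video can appear above a friend’s post: once popularity gets large enough, it outweighs the connection signal.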

Why should parents care about algorithms? 

There are ways social media algorithms can benefit your child, such as creating a personalized experience and helping them discover new things related to their interests. But the drawbacks are also notable — and potentially concerning. 

Since social media algorithms show users more of what they seem to like, your child's feed might quickly become overwhelmed with negative content. Clicking a post out of curiosity or naivety, such as one promoting a conspiracy theory, can inadvertently expose your child to more such content. What may begin as innocent exploration could gradually influence their beliefs.

Experts frequently cite “thinspo” (short for “thinspiration”), a social media topic that aims to promote unhealthy body goals and disordered eating habits, as another algorithmic concern.

Even though most platforms ban content encouraging eating disorders, users often bypass filters using creative hashtags and abbreviations. If your child clicks on a thinspo post, they may continue to be served content that promotes eating disorders.

Social media algorithm tips for parents

Although social media algorithms are something to monitor, the good news is that parents can help minimize the negative impacts on their child. 

Here are some tips:

Keep watch

It’s a good idea to monitor what the algorithm is showing your child so you can spot any concerning trends. Regularly sit down with them to look at their feed together. 

You can also use a parental monitoring service to alert you if your child consumes alarming content. BrightCanary is an app that continuously monitors your child’s social media activity and flags any concerning content, such as photos that promote self-harm or violent videos — so you can step in and talk about it.

Stay in the know

Keep up on concerning social media trends, such as popular conspiracy theories and internet challenges, so you can spot warning signs in your child’s feed. 

Communication is key

Talk to your child about who they follow and how those accounts make them feel. Encourage them to think critically about the content they consume and to disengage if something makes them feel bad. 

In short

Algorithms influence what content your child sees when they use social media. Parents need to be aware of the potentially harmful impacts this can have on their child and take an active role in combating the negative effects. 

Stay in the know about the latest digital parenting news and trends by subscribing to our weekly newsletter.


Between 22% and 47% of kids use a fake age on social media. Although platforms like Instagram and YouTube now use Artificial Intelligence (AI) to help with age verification, kids are leveraging that same technology to bypass age gates.

This article explains the ways kids bypass age verification on YouTube, social media, and other online spaces. It also covers the risks to kids from faking their age on social media and strategies for parents to keep kids safe.   

How do kids bypass social media age verification? 

Children are resourceful creatures, and their strategies for bypassing age verification range from straightforward to straight-up inventive. 

1. Lying about their birth date

A startling number of platforms require nothing more than a user-provided date of birth. At BrightCanary, we review a lot of social media apps, and one thing we always assess is the strength of the age verification system. Flimsy measures like self-reported age practically invite children to lie about their age to gain access. 

2. Using video game characters 

If an app asks for a live selfie to verify age, some users snap a pic of a video game character. Games with hyper-realistic characters, like those from GTA V or The Last of Us, are popular choices, but so are less realistic games that allow users to pose characters and control their facial expressions. 

3. Submitting photos of actors 

Similar to the video game strategy, some kids take a picture of an actor on screen to submit for age verification, essentially using a celebrity face to pass as an adult.

4. Leveraging AI tools

Another way kids fool AI systems designed to verify age is by using AI. This can be with deepfakes of real people, generative AI images, or age progression apps like FaceApp. 

What happens when kids fake their age on social media

Here are some of the risks kids face when they falsify their age online:

  • Exposure to inappropriate material. When kids pretend to be older, they lose access to content filters. That means they can easily encounter explicit videos, mature themes, or harmful challenges. 
  • Contact with predators. Instagram teen accounts restrict direct messages to people the user is already connected with. A fake age allows predatory adults to contact them. Additionally, kids who falsify their age to join adult platforms risk being exposed to predators.   
  • Skewed algorithms. Social media and YouTube algorithms rely partially on demographic information such as age when recommending content. If a child pretends to be older than they are, they may be fed content intended for older users. 
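To see why a faked birth date skews recommendations, consider this minimal Python sketch. It is an invented illustration (the topic list and age cutoff are assumptions, and real platforms use many more signals), but it captures the basic idea that content filters are keyed to the age on the account:

```python
# Illustrative sketch only -- not any platform's actual logic.
# The filter that protects a minor's account is keyed to the
# self-reported age, so a fake age silently removes it.

MATURE_TOPICS = {"gambling", "dating", "alcohol"}

def recommendable(post_topic, account_age):
    """A minor's account filters mature topics; an 'adult' account doesn't."""
    if account_age < 18 and post_topic in MATURE_TOPICS:
        return False
    return True

real_age, claimed_age = 13, 21
print(recommendable("dating", real_age))     # → False (filtered for a 13-year-old)
print(recommendable("dating", claimed_age))  # → True (the fake age removes the filter)
```

The child is the same in both cases; only the number on the account changed.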

How can I check if my child is faking their age online?

Take these important steps today to protect your child: 

1. Require permission to download apps

Both Apple and Android devices allow you to require permission for your child to download apps. This helps you stay in the loop about what platforms they're on so you can check to see if they’re using a fake age. 

2. Check their settings

Peek at your child’s app settings to see what age they’ve entered. You can find this in their profile or “About” section on most platforms.

3. Look at their profile

How your child presents themselves online can give you clues as to whether they’re pretending to be older than they actually are. Is their bio or posted content more mature than you’d expect for their age? That’s a red flag.

4. Use a monitoring app 

BrightCanary shows you which apps your child uses and what they’re typing, so you can detect potential age-faking or unsafe interactions early. And if your child is using apps you didn’t know about, you’ll be able to see them in the BrightCanary dashboard. 

How to talk to your child about faking their age online

Talking to your child about the importance of not faking their age online is a vital component of keeping them safe. Here are some tips to get you started:

Assume good intent

Partnering with your child is the most effective way to keep them safe online. When you discuss the importance of not faking their age, embrace this team mentality and assume their intentions are good. 

If you’re being proactive in bringing up the issue, be clear that it’s not that you don’t trust them; you’re simply trying to help them make good decisions. If you’ve already discovered they lied about their age, start by asking them why they made that choice and be clear that your goal is not punishment — it’s protection. 

Educate them on the risks 

Explain the dangers of falsifying their age online. When they bypass age verifications, most kids don’t realize the potential consequences. Arming them with this information can help them make safer choices for themselves. 

Foster open communication

Rather than a “one-and-done” approach, make online safety an ongoing conversation. If it’s a common, casual conversation topic, your child is more likely to be open with you about their behavior and to come to you if they misstep. 

In short

A large percentage of kids report lying about their age on YouTube and social media. As platforms use AI to increase their age-verification measures, kids are finding creative ways to bypass the new systems. Faking their age online exposes kids to a variety of risks. Parents need to take a proactive approach to monitoring their child online to ensure they aren’t lying about their age. 

BrightCanary helps you monitor your child’s activity online. Download today to get started for free.


Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • After mounting scrutiny and legal pressure, Character.ai is banning teens from its platform.
  • Internal Meta research shows that Instagram shows more body-focused content to vulnerable teens. 
  • Is your child a mega-fan of a celebrity or content creator? Save these tips on how to talk to your child about parasocial relationships and fandom.

Digital parenting

🤖 Character.ai to ban teens from talking to its AI chatbots: The chatbot platform recently announced that, beginning November 25, users under 18 won’t be allowed to interact with its online companions. The change comes after mounting scrutiny over how AI companions impact users’ mental health. In 2024, Character.ai was sued by the Setzer family, who accused the company of being responsible for their son’s death. Character.ai also announced the rollout of new age verification measures and the funding of a new AI safety research lab.

Teens will still be able to use Character.ai to generate AI videos and images through specific prompts, and there’s no guarantee that the age verification measures will prevent teens from finding ways around them. If your teen uses AI companion apps: talk to them about the safety risks, use any available parental controls, and stay informed about how they interact with AI chatbots. And remember: for every app like Character.ai, there are countless others that aren’t taking the same steps to protect younger users.

Learn more about Character.ai on our blog, and use BrightCanary to monitor your child’s interactions across every app they use — including AI. 

🚫 Instagram shows more disordered eating content to vulnerable teens: According to an internal document reviewed by Reuters, teens who said Instagram made them feel worse about their bodies were shown nearly three times more “eating disorder–adjacent” content. Posts included idealized body types, explicit judgment about appearance, and references to disordered eating.

Meta also admitted that their current safety systems failed to detect 98.5% of the sensitive material that likely shouldn’t have been shown to teens at all. While Meta says it’s now cutting teen exposure to age-restricted content by half and introducing a PG-13 standard for teen accounts, these findings highlight a major gap between company promises and real-world outcomes. 

Parents shouldn’t wait for algorithms to get it right. If your teen uses Instagram:

  • Make sure they have a teen account, which automatically applies stricter content settings. Review their account settings and make sure their feed filters are set to “less sensitive.”
  • Talk openly about how certain posts make them feel, encourage them to take social media breaks, and remind them that what they see isn’t real life. 
  • Monitor their social media and regularly check in about what they see, search, and send.


Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.


Tech talks

Let’s talk about fandoms and why your teen might feel really attached to someone they’ve never met. Whether it’s a YouTuber who “gets them,” a favorite pop star, or an AI companion that feels like a friend, these relationships can make kids feel seen and part of a community. But they can also blur the line between admiration and obsession.

Use these conversation-starters to help your teen think critically about their online relationships:

  1. “Who are some creators or celebrities you feel really connected to online? What do you like about them?”
  2. “What’s the difference between supporting someone you admire and feeling like you know them personally?”
  3. “Have you ever felt let down by someone you follow online? What happened?”
  4. “Do you think online creators have a responsibility to be good role models?”
  5. “Some influencers talk directly to fans like close friends. Why do you think that feels so real?”

What's catching our eye

👀 Elon Musk has launched Grokipedia, an AI-generated online encyclopedia positioned as a rival to Wikipedia — but it’s still unclear how it works. Users have reported factual inconsistencies in Grokipedia’s articles, so now’s a good time to chat with your child about checking their sources.

😔 High schoolers are so scared of getting filmed that they’ve stopped dating. This piece from Rolling Stone explains how the unchecked culture of public humiliation on social media is fueling mistrust among young men, making them hesitant to pursue relationships. 

👋 We share even more parenting tips and resources on our Instagram. Say hi!


Influencers: They call us their bestie, show off their hauls, and model their fits. It may seem harmless, but what happens if your child wants to be one? In this article, we’ll go over what kidfluencing is, the risks, and how to keep your kidfluencer safe online.   

What is a kidfluencer? 

A kidfluencer is a child who creates content online with the goal of gaining followers, generating views, and often making money through brand deals or sponsorships. 

The term is a mashup of the words “kid” and “influencer.” While social media platforms like Instagram and TikTok technically require users to be 13 or older, many kidfluencers start much younger, with parents managing their accounts. 

Some of these children build audiences in the millions — but the spotlight can come with serious safety and mental health risks. 

What are the risks of kidfluencing?

Kidfluencing isn’t something that should be undertaken lightly. Here are the risks you need to know:  

  1. Predators. Many kidfluencers, particularly girls, have followings that include large numbers of men. In fact, investigations have shown that, when a user’s activity indicates they may be sexually interested in children, Instagram’s algorithm recommends additional child accounts for them to follow. 
  2. Identity issues. Even when they try to show their authentic selves online, it’s hard for influencers of any age to not let the feedback and engagement of their followers shape the image they put forth. For kidfluencers, this can make it difficult to maintain a strong sense of self and a healthy separation between their online personas and offline selves. 
  3. Exploitation. Men involved in child pornography often pose as photographers and social media professionals, offering to help kidfluencers and their parents grow their following.
  4. Images could end up anywhere. Your child’s digital footprint could follow them well into adulthood. The more public their account, the more likely that will happen. One frightening place some kidfluencers' images show up is as screenshots traded on Telegram channels dedicated to child sexual exploitation.
  5. Mental health issues. Similar to child stars, kidfluencers can experience burnout, depression, or anxiety. Constantly being “on” and chasing engagement can damage a child’s self-worth and emotional well-being.

What should I do if my child wants to be a kidfluencer? 

If your child wants to be a kidfluencer, take the time to carefully evaluate if it’s the right thing for them and for your family. Putting themselves online in such a public way is no small thing; it’s your job to help them make a sound decision. 

Here are some factors to consider:

1. Examine their motivations

Carefully evaluate if your child actually wants to be a kidfluencer, or if their motivation may be caused by subtle encouragement from peers or even other parents.

2. Talk about power and responsibility 

Kidfluencers have substantial influence over their young followers. Help your child understand their responsibility to be a positive role model. 

3. Establish firm boundaries 

Work with your child to decide what’s okay and what’s a no-go for their account. Consider:

  • What topics can they post about, and what’s off-limits? 
  • Is the clothing they wear online appropriate? 
  • Are there aspects of your family life that are off-limits? 
  • When can they be online, and when do they need to shut down? 

4. Stay involved

To make sure your child stays safe, you should be involved in their account. That could mean your child creates the content but has no access to the account it’s posted on. It could also mean your child has some access, but you’re the only one who can access messages and control followers. 

5. Set realistic expectations

Some kidfluencers earn large amounts of money from their activity, but most don’t. Make sure you and your child both have realistic expectations for what might come from their efforts. If this is just a fun way for them to express themselves, do they really need to build an online presence, or can they just share videos with friends and family?

How to keep your kidfluencer safe online

If you and your child have talked through all the risks and decided to go ahead with their plan to be a kidfluencer, here are some steps you can take to help keep them safe:

  • Educate them on the dangers. Don’t shy away from telling your child about the dangers they face if they want to be a kidfluencer. Teach them how to spot scams and grooming.
  • Teach them not to reveal personal information. Sharing their life online is a big part of what it means to be an influencer, but revealing personal information is very risky. Help your child find the balance between sharing and oversharing.
  • Stay involved. Monitor your child’s kidfluencing activities by participating in their account management, sitting down with them to discuss and review their content plans, and using a monitoring app like BrightCanary.

In short

Being a kidfluencer might sound exciting, but it also brings real risks, like predators, exploitation, and mental health problems. If your child wants to be a kidfluencer, it’s important to educate them on the potential dangers and take steps to protect them online.

BrightCanary can help you identify if your child is angling to be a kidfluencer. If your child searches for topics related to becoming the next big influencer or messages friends about their plans, you’ll be able to see it. And our AI-powered Ask the Canary can help you find the right words to talk to them about it. Download the app today to get started.

FAQ

What is a kidfluencer?

A kidfluencer is a child under 18 who creates social media content to build an audience and often earns money through sponsorships or brand deals.

What are the risks of kidfluencing?

Risks include exposure to predators, exploitation, mental health challenges, and loss of privacy.

How can parents keep kidfluencers safe?

Parents should manage account access, monitor messages, set clear boundaries, and use tools like BrightCanary to oversee online activity.


Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • 88% of parents have rules around screen time, 87% of teens use iPhones, and more of the latest stats on tweens, teens, and tech. 
  • Meta announces new parental controls for its AI chats, and Pinterest gives users a way to turn AI recommendations off. 
  • Why is AI slop overwhelming your child’s social media feed — and should they use Sora?

Digital parenting

📊 How tweens and teens use tech, by the numbers: Did you know that 42% of parents say they could do a better job managing their child’s screen time? That’s according to a new report by Pew Research Center. Here’s what the data showed:

  • 92% of parents say ease of contact is the key reason they let their child have a phone.
  • 88% of parents who don’t let their child use a smartphone say it’s because their child might see inappropriate things online.
  • 57% of parents of an 11 or 12-year-old say their child has their own smartphone, compared with 29% of parents of an 8- to 10-year-old.
  • 86% of parents have rules around when, where or how their child can use screens, but just 55% say they stick to their screen time rules most of the time.

We also have new numbers about where kids spend their time online and what risks they face:

  • TikTok (46%) is the most used social media app for teens, followed by Instagram (31%) and Snapchat (14%).
  • Half of girls have been exposed to harmful content online, and teens are twice as likely to see it on TikTok and X.
  • 94% of boys are online daily, and nearly three-quarters of boys 11 to 17 are regularly exposed to content about what it means to “be a man.” 

One thing that didn’t change from last year: 87% of teens own an iPhone. If you want a parental monitoring app that actually works on Apple devices, you need BrightCanary.

🤖 Meta and Pinterest roll out updates to AI: Meta announced parental controls for its AI chat experiences, including the ability to turn off chats with AI characters for teens. Parents can also disable individual AI characters, review topics their teen discusses with Meta AI, and know that AI experiences are now PG-13 — which means they’ll allegedly avoid content with nudity, graphic content, or drug use. While these updates sound promising, you should stay involved with your child’s social media use, especially if they’re talking to AI companions.

Meanwhile, Pinterest rolled out a way for users to filter AI images out of their recommendations. It’s relatively common for generative AI images to end up in categories like fashion, beauty, and home decor, but this new setting maintains the human touch in what ends up on your child’s Pinterest feed. If they use Pinterest, we recommend walking them through how to find this feature in Settings > Refine Your Recommendations.

Want to learn how to protect your child from risky AI apps right now? Download our free AI Safety Toolkit for Parents. It includes step-by-step guidance for monitoring AI use and talking to your teen about AI.

🎥  AI slop takes over social media after OpenAI’s Sora launch: OpenAI’s new app, Sora, lets users create and remix short AI-generated videos … and upload their own faces so they can include them in skits. Experts warn this could make deepfakes harder to detect and open the door to harassment and misinformation (as well as copyright infringement). We’re working on a Sora guide for parents on the BrightCanary blog. What questions do you have about it?




Tech talks

It’s never been harder to tell what’s real online. Between AI videos, virtual friends, and algorithm-fed content, helping your teen think critically is key. Here are a few ways to start the conversation:

  1. “How can you tell if something you see online is real or AI-generated?”
  2. “Have you ever seen a video that looked real but wasn’t? How did you figure it out?”
  3. “Do you think AI should have rules about what it can say to kids?”
  4. “What’s a good way to double-check information before believing it?”
  5. “Do you think it’s okay for people to make videos of others without their consent?”

What's catching our eye

⚠️ That didn’t take long — experts warn that ChatGPT’s new parental controls are easy to bypass. A Washington Post columnist did it in minutes.

🐻 California Governor Newsom signed two key bills into law. SB 243 requires AI companion apps to prevent conversations with minors about suicide, self-harm, and sexually explicit content; clearly disclose when users are chatting with AI; and allow citizens to sue AI companies. AB 36 requires warning labels on social media platforms.

💡 Did you know? You can use BrightCanary to monitor your child’s Roblox chats on their iPhone and iPad. Here’s why we recommend monitoring Roblox.


Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • Learn how to monitor your child’s AI apps with our FREE AI safety toolkit for parents. 
  • Instagram is testing the ability to adjust your algorithm. We break down why this is a good feature to explore with your child.
  • We spoke with the team at Culture Reframed about parenting through the porn crisis, how to start conversations, and essential tips to keep kids safe. 

Digital Parenting

🤖 Free AI safety toolkit for parents: ChatGPT now has parental controls, but are they doing enough for parents? AI is everywhere in your child’s digital world. OpenAI recently launched Sora, a social network app filled with “hyperreal” AI-generated videos. (If your child uses Instagram, a version of this is already available in their app, called “Vibes.”) AI companion apps are having shockingly detailed and intimate conversations with kids. And who’s to say what the future holds for how kids use AI? 

Parents need better tools to monitor how their kids use AI today. That’s why we’re excited to bring you this free AI safety toolkit, created by the parents at BrightCanary. In it, you’ll find a cheat sheet of the most popular AI apps, a simple setup checklist to better protect your child, a quiz to evaluate your child’s AI safety, and more. Download the guide (free PDF) today.

Did you know? BrightCanary monitors every app your child uses, including what they type on ChatGPT, Character.ai, Meta AI, and more. Get 20% off BrightCanary Protection to monitor AI prompts and get concerning content alerts with code SAFETY20. 

🔄 Instagram testing ability to “tune” algorithms: In an Instagram post celebrating three billion monthly active users, CEO Adam Mosseri announced that users will be able to add and remove topics based on their interests. Instagram, like other platforms, uses an algorithm to determine what your child sees on their feed, based on the content they like, comment on, and share. But social media algorithms have a snowball effect. If they search for topics like violence, adult material, or conspiracy theories, they’ll see more negative content on their feed. 

Being able to add and remove specific topics means that your child can have more control over what they see and what’s recommended. In the meantime, periodically check out your child’s social media feeds together. And if their feed needs a clean-up, we’ve covered how to reset your child’s social media feeds — and how to talk to them about why that matters.

Tech Talks

Talking about AI doesn’t have to be awkward. These conversation starters come from our free AI safety toolkit for parents. Use these prompts to start the dialogue, and download the guide for even more safety tips. 

  1. “Do your friends use AI apps? What do they ask it?”
  2. “What’s the difference between talking to an AI and talking to a friend?” 
  3. “If an AI gave you advice that felt wrong, what would you do?”
  4. “Do you think AI can be trusted?”
  5. “What would you do if AI gave you a weird answer?”

What’s Catching Our Eye

💸 ChatGPT users will be able to use Instant Checkout to make purchases from Etsy and Shopify, all without having to leave the app — so, now’s a good time to talk to your child about purchase limits and why they shouldn’t use ChatGPT to buy their entire Christmas list. 

❤️‍🩹 October is National Bullying Prevention Month. What should you do if your child is getting bullied on social media? Save these tips.

📱 One in five Americans regularly get their news from TikTok, a sharp increase from 2020.


Pornography is more accessible than ever, and kids are seeing it younger than most parents realize. Studies show the average age boys first see porn is just 9–11 years old. With mainstream porn sites delivering violent and degrading images for free, experts say pornography has become one of the biggest crises of the digital age.

We spoke with Dr. Gail Dines, Founder & CEO of Culture Reframed, and Dr. Mandy Sanchez, Director of Programming, about what parents need to know, how to start conversations, and what families, schools, and organizations can do together to protect kids.

What is Culture Reframed?

Culture Reframed is a global, science-based organization that equips parents, educators, and professionals to address the harms of pornography on youth. 

Through robust online courses, resources, and advocacy, they help ensure kids develop healthy, respectful, and egalitarian views of sex and intimacy. Every year, they support tens of thousands of families worldwide.

A conversation with Culture Reframed

What inspired the organization’s founding, and what is your mission today? 

Dr. Gail Dines: Most of us on the Culture Reframed (CR) team have been studying the effects of pornography on young people for many years. What galvanized us into founding CR is the way mainstream, free pornography has become so accessible to youth. Pornography has become the wallpaper of their lives.

In the absence of comprehensive sex education, young people are turning to pornography. The adolescent brain is especially vulnerable to such images because it is still developing: the prefrontal cortex, which supports rational decision-making, is not yet fully formed. Young people, especially boys, are more likely to develop their sexual template and sexual scripts from pornography, which can lead to anxiety, depression, addiction, and sexual abuse of others.

Our mission is to work to stop the emotional, behavioral, and sexual harms of pornography on young people. We have developed courses for parents, educators, and medical experts because these are the primary people tasked with protecting the well-being of young people. Education is a central part of our work, and our courses are unique in that they are science-based but accessible. 

What are some of the biggest misconceptions parents have about how pornography impacts young people? 

GD: One major misconception is “not my child.” If your child has a device, the question isn’t if they’ll see pornography, but when. Even if they’re not looking for it, the porn industry develops algorithms that target young people, often through social media platforms. 

Many parents are not aware of just how violent mainstream pornography is. We encourage parents to take a quick look at the major porn sites, such as Pornhub, so they can see what their kids are seeing. They most likely will be horrified.

Parents also need to become familiar with social media platforms such as Snapchat and TikTok because these can often become a gateway to pornography use. Studies show that these sites are full of pornographic images, as well as men trolling to groom kids into becoming victims of sexual abuse. 

This is a lot to ask of parents, but given the nature of online life, it is as important as educating your child about the harms of drugs. Pornography has become one of the major crises of the digital age. 

Mandy Sanchez: The second misconception is that porn “is not that bad.” The fact is that most mainstream, online pornography is violent and degrading, depicting harmful stereotypes and unhealthy sexual scripts. There is more than four decades of scientific research that documents the social, emotional, behavioral, and cognitive harms of pornography to young people. 

Finally, parents often think there is nothing they can really do about their kiddos’ imminent exposure to pornography — and this misconception keeps many parents from believing they have any control. But the truth is, parents are perfectly positioned to help their children build resistance and resilience to pornography. 

By becoming knowledgeable, skilled, and confident to have critical conversations, parents can offer their kids an alternative script: healthy and safe messages about sex and relationships based on their age and stage of development. 

If a parent suspects their child has been exposed to porn, what’s the most important first step they should take? 

GD: Approach them without shame or blame. Young people feel shame (among other emotions) when watching pornography. The goal is to help them understand that it is not their fault, but rather the fault of a porn industry run amok, and the failure of policymakers to address the problem of easily accessible pornography. 

If your child has seen pornography, you need to have a calm, honest, and inviting conversation about the way they feel. They will be disturbed by the images, but often lack the vocabulary to put these feelings into words. Help them to think through the ways they feel and provide plenty of room for them to express themselves. You can ask questions, but don’t lecture your kids. 

Importantly, keep the conversations short. No young person wants to be sitting across from their parent talking about pornography, so make the conversations as inviting as possible. 

If you feel your child is developing problematic porn use, which involves behaviors such as isolation from peers and family, lack of sleep, excessive time spent online, and mood shifts, we recommend finding a therapist who specializes in problematic porn use among young people.

What practical tips would you give parents to start age-appropriate conversations with their kids about pornography and hypersexualized media?

MS: Educate. Compose. Communicate. Monitor. Report. 

First, I encourage parents to become knowledgeable about the harms of porn, how it shapes and influences young people, and how the industry is exposing them. 

Next, COMPOSE yourself in order to create the space for a calm, safe conversation. Remember to respond with empathy and care, instead of reacting with shame or blame. Aim for short, regular conversations that meet your kiddos where they are. And if you don’t know where they are or what they’re doing or feeling, ask! 

Be present and watch for warning signs that your kiddo may be struggling. Look for teachable moments in everyday media to educate kids about consent, body boundaries, digital safety and well-being, and safe, healthy behavior. 

Monitor connected devices with privacy settings and parental controls. 

Finally, report online exchanges involving child sexual abuse materials to the National Center for Missing and Exploited Children through its online CyberTipline.

What role do you think parents, schools, and organizations like Culture Reframed can play together in creating healthier digital spaces for kids? 

MS: Parents, schools, and organizations like Culture Reframed can work together to shift the cultural narratives about pornography, reframing the conversation around healthy, safe, connected relationships among young people. 

Research consistently shows that when we have porn-critical conversations with young people, risky behavior is reduced by 75%! 

When these groups unite to create and maintain healthier, safer digital spaces for young people, we become an unstoppable force. We can reduce porn’s harmful effects and provide the space for young people to develop authentic, healthy, safe, and rewarding relationships.

The bottom line

Pornography has become one of the defining crises of the digital age — and kids are on the front lines. Parents can’t rely on schools, platforms, or tech companies to protect their kids. It starts with open conversations, proactive monitoring, and supportive resources.

Parents don't have to do it alone. Culture Reframed offers science-based courses to help parents build resilience in their children against porn culture. And BrightCanary helps parents monitor what kids type across every app they use, so you can step in when it matters most.


Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:

  • Nearly a third of teens say chatting with an AI is as satisfying as talking to a person. Do you know how your child uses AI companion apps?
  • Sony rolled out new parental controls for PS4 and PS5 that you can start using today. 
  • What are the “rules” for reading your child’s text messages?

Digital Parenting

🤖 Nearly three-fourths of teens use AI companions: Does your teen have a secret girlfriend? What if she’s AI? According to Common Sense Media, 72% of teens ages 13–17 have used AI companions — apps designed specifically for emotional support, connection, and human-like interactions. Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, and 10% said it felt more satisfying. 

But AI companies don’t have a good track record of keeping child safety in mind with these conversations. It wasn’t until recently that OpenAI announced that ChatGPT will stop talking about suicide with teens, and the FTC is demanding information from OpenAI, Snap, Meta, and other tech companies about the safety measures in place to protect kids who interact with their AI chatbots. 

Kids are having vulnerable, emotionally charged conversations with AI characters that aren’t designed with age-appropriate content filters, and kids are suffering because of it. If your child uses AI apps like Polybuzz, Character.ai, and ChatGPT, you need to stay informed about the content of their conversations, because it’s not always fun and games. 

We’re launching a new way to monitor your child’s AI apps in BrightCanary. You’ll be able to see not only what apps they’re using, but also what they’re sending and any red flags in their conversations. The update rolls out this week — stay tuned.

🎮 PlayStation debuts new parental control app: Good news if your kiddo is a gamer — Sony’s new PlayStation Family App offers robust parental controls and insights across PS4 and PS5. The app, available on iOS and Android, allows parents to see what games their kids are playing, approve extra playtime requests, restrict certain games, and customize privacy settings. Parents can also get real-time notifications when their kids are playing, as well as set playtime limits for each day of the week, among other features. The PlayStation Family App is available now. 

📱The rules for reading your teen’s text messages: Talk to five other parents, and you’ll get five different approaches to monitoring phones. Some parents spot-check their child’s texts, while others take a hands-off approach. What’s the “right” way? According to the experts, your best bet is to: 

  • Start early. Younger kids shouldn’t have free rein to message people or use their phone whenever they want.
  • Don’t sneak. Parental monitoring shouldn’t be a secret. Explain your concerns in an age-appropriate way, and make monitoring a requirement if they want their own device.
  • Have the hard conversations. If you find something inappropriate on your child’s phone, talk to them about it and let them know you can help in tricky situations. 
  • Give your kids more independence as they mature. Younger kids need a hands-on approach, while older kids can handle more autonomy. Do what works for your family.

Check out the full list at Good Housekeeping, and save these nine mistakes parents make with text message monitoring (we’re all victims of #8). 




Tech Talks

Teens are experimenting with AI companions for connection — sometimes instead of turning to friends or family. That’s why it’s important to talk about what these chats mean, what feels supportive, and what feels harmful. 

  1. “What do you think makes AI chats feel different from talking to a friend?”
  2. “If you had a hard day, who would you rather talk to — an AI or a real person? Why?”
  3. “Do you think AI always gives good advice?”
  4. “Have you ever seen an AI say something that felt wrong or unhelpful? How did you react?”
  5. “What do you think AI can do well, and what should always be left to people?”

What’s Catching Our Eye

💢 One way to get teens to listen to you: talk the talk and walk the walk. That’s according to a new study by an international team of researchers, which concluded that “the way teenagers receive their parents’ warnings depends less on the message itself and more on whether they see their parents as genuinely living up to their own purported values.”

⏳ Oracle is calling dibs on TikTok. In the latest on TikTok’s fate in the US, the software giant Oracle will license TikTok’s algorithm. For now, things are status quo on your child’s favorite app to watch GRWM videos — President Trump extended the ban deadline another 120 days to allow time for the transaction to take place.

📚 “Digital literacy should be a part of every child's education, and today it must include AI literacy,” writes digital literacy educator and advocate Diana E. Graber.


If you’re concerned about the safety of TikTok for your child and looking for alternatives, you might come across Triller in your search. Though the app frequently lands on lists of kid-safe TikTok alternatives, it’s not without risk. 

So, is Triller safe for kids? In this article, we’ll take a look at the dangers of Triller, how it compares to TikTok, and why it’s not the safest choice for kids.

Is the Triller app safe for kids?

No, the Triller app is not safe for kids. Unlike TikTok, Triller has no parental controls, no age verification, and allows direct messages from strangers. The app also contains large amounts of inappropriate content, making it unsafe for children and younger teens.

What is the Triller app?

Triller is a short-form video platform similar to TikTok and Instagram Reels. It allows users to create and post videos and view other users’ content. 

Originally designed as a music video app, Triller offers a broader range of music and a more overt focus on fame and gaining followers than other platforms. 

How does Triller work? 

Triller lets users film multiple takes of a video and then use the built-in AI editing tools to automatically select and combine the best clips to generate a slick-looking video. 

Like other social media platforms, Triller lets users follow other creators and like and comment on videos. And like TikTok, Triller suggests videos for users. But while TikTok bases its suggestions on a user’s watch history, Triller’s Discover page is built around promoted campaigns, top videos, and genre categories. 

Does Triller have parental controls? 

Not only does Triller have no parental controls, but it also lacks any form of age verification.

Why is Triller unsafe for kids?

Although Triller has a few (read: very few) safeguards in place, like the ability to set accounts to private, turn off data collection, and block users, the red flags are plentiful. 

  • No age verification. Flimsy age verification is a problem for many social media apps (we’re looking at you, TikTok), but Triller doesn’t even pretend to try. 
  • No parental controls. None, zip, zilch, nada.  
  • DMs with no ability to limit messages to contacts. Social media platforms, especially those without the ability to restrict who can message you, are prime spots where predators target children. Triller doesn’t allow users to limit DMs to people they’re connected with, meaning anyone could message your child.
  • Tons of inappropriate content. Triller is filled with content that’s inappropriate for kids, such as highly suggestive videos, profanity, and content promoting substance use. 
  • Location of videos can be revealed. Users can reveal the location where their content was filmed, opening kids up to serious safety concerns. 

Triller vs. TikTok: Which app is safer for kids?

All things considered, Triller is much less safe than TikTok. Here’s how the apps stack up: 


Feature | TikTok | Triller
Parental controls | Yes | No
Age verification | Yes, easy to bypass | No
Limit direct messages | Yes | No, must block users individually
Content moderation | Yes, but explicit content slips through | Yes, but explicit content slips through
Explicit material | Prohibited but common | Prohibited but common
Community Guidelines | Yes, but not kid-focused | Yes, but not kid-focused

Final word: Is Triller safe for kids? 

Triller is a video-sharing app that has its sights set on competing with TikTok. But Triller is not safe for kids, including younger teens. With inappropriate content, a lack of parental controls and age verification, and no ability to limit who can message you, Triller is only appropriate for users 17 and older. 

BrightCanary helps parents monitor what their children type and search on the apps they use the most, including Triller and TikTok. Download today to get started for free.
