
Welcome to Parent Pixels, a parenting newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. This week:
👉 Instagram will let your child pick what shows up in Reels: Instagram is doing something pretty unusual for a social media platform: explaining what’s under the hood. With a new feature called “Your Algorithm,” users can now see a summary of their recent interests and choose topics they want to see more or less of, like dialing up “jiu jitsu” and dialing down “AI cat videos.”
For parents, this product update is also a conversation-starter with your teen. Social media algorithms aren’t neutral. They learn from behavior, reward attention, and quietly shape what kids see day after day. This feature offers a rare moment to pause the scroll together and ask:
Why do you think Instagram thinks this is your interest?
How do videos like this make you feel after watching them for a while?
What would you want to see more of (or less of) if you had the choice?
Our take: Tools like this don’t “fix” social media, but they do help kids understand that feeds are designed to hook you based on your interests. The more teens understand how algorithms work, the better equipped they are to use platforms intentionally instead of getting pulled along for the ride. For more on this, browse our parent’s guide to social media algorithms, and learn how to reset your child’s algorithm on popular platforms.
🎁 Thinking about a smartphone for the holidays? Read this first: If a phone is on your child’s holiday wishlist, new research suggests it’s worth waiting. A large study published in Pediatrics found that kids who got their first smartphone before age 13 had significantly worse health outcomes than peers without phones:
Additionally, a new study from the American Psychological Association now directly ties short-form video content to significantly diminished mental health and poorer attention spans.
The median age for getting a phone in the U.S. is now 11, which means many kids are entering middle school with a powerful device and very few guardrails. However, the takeaway from experts isn’t panic: it’s constraints. Use parental controls like Apple Screen Time to set restrictions on device use, and use a monitoring app like BrightCanary to stay informed about what your child encounters online.
One simple, high-impact step? Keep phones out of bedrooms overnight. It’s not a cure-all, but it’s one of the easiest ways to protect sleep and manage device boundaries, even if your child already has a phone.
Parent Pixels is a biweekly newsletter filled with practical advice, news, and resources to support you and your kids in the digital age. Want this newsletter delivered to your inbox a day early? Subscribe here.
A few questions to help kids think critically about feeds, phones, and habits:
📰 We were included in Wirecutter’s roundup of best parental control apps! Check us out under "Other parental control apps worth considering."
🚫 “It was kind of scary, because social media is so present in my life, and to think it could be taken away like that so suddenly felt weird.” Australia’s social media ban kicked in last week, barring teens under age 16 from using Instagram, YouTube, TikTok, and other major platforms. Here’s how teens are responding.
🤖 Researchers warn that popular AI tools are offering dieting advice, tips for hiding disordered eating, and even generating hyper-personalized “thinspiration” images. Experts say this content can be especially dangerous for vulnerable teens — and much harder to spot than traditional social media posts.

🇸🇪 Sweden pulls the plug on screens in childhood: Sweden — home of Spotify, Minecraft, and a very tasty Christmas soda called Julmust — has long embraced the idea that kids thrive with freedom. Many parents and educators extended that same philosophy to screens, allowing kids free digital access with limited oversight. In 2019, digital tools were even mandated in the national curriculum for 1- to 5-year-olds, fueled by concerns that Swedish kids would fall behind in an AI-driven future.
Then came the data. In 2022, Swedish 15-year-olds recorded their lowest math and reading scores in a decade, and more than a quarter performed poorly in math. The country’s education agency found that students who used digital media for things other than learning performed the worst. Additionally, nearly 9 in 10 teachers said smartphones were harming students’ learning, stamina, and attention spans.
Sweden course-corrected with its first-ever national screen time guidelines in 2024:
The country also banned phones from classrooms and boosted physical textbooks and library funding. Sweden’s actions illustrate that we can’t expect our kids to be prepared for the digital future if they don’t learn how to use those devices safely and responsibly. Following strict screen time limits is one method. The other is staying involved in what they do online and setting guardrails in and out of the home. What do you think about Sweden’s screen time guidelines?
🧸 OpenAI blocks toymaker after AI toy crosses the line with kids: AI toys are everywhere right now, but not all of them are safe for kids. A new report from the Public Interest Research Group found that some AI-enabled toys were quick to discuss inappropriate or dangerous topics with young children. One AI teddy bear gave minors instructions on how to light matches or find knives in the home, along with explicit advice. (OpenAI has since suspended the toymaker, FoloToy, following the investigation.)
“It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in a new statement. “Removing one problematic product from the market is a good step, but far from a systemic fix.”
If you’re shopping for young children this season, advocacy groups urge parents to avoid buying AI-enabled toys right now.
🤖 Parents, turn on this new setting if your child uses TikTok: AI-generated videos, filters, and characters are flooding TikTok — but now you can control how much of it appears on your child’s For You page. TikTok’s new AI-generated content control lets users dial AI content up or down directly in settings. To adjust it, sit down with your child and go to their TikTok settings > Content Preferences > Manage Topics. Then, adjust the slider for AI-generated content (we recommend dialing this all the way down). TikTok is rolling out this new feature to accounts over the coming weeks.
Pair this with conversations about what’s real vs. AI-created — and why they shouldn’t always trust everything they see online. Not sure how to start the conversation? Check out our guide about how to talk to your kid about AI-generated deepfakes.
🏛️ Congress unveils major kids’ online safety package: The House Energy and Commerce Committee released 19 bills focused on protecting kids online. The package includes a revised version of KOSA, though without the broad “duty of care” language that sparked First Amendment concerns. Instead, platforms would need “reasonable policies and procedures” to address four harms: physical violence, sexual exploitation, drug/alcohol/tobacco-related risks, and financial harm and scams. Advocates say it’s progress, while critics say the new version of KOSA won’t do enough. We’ll keep you posted.
Let’s talk about screen balance, Sweden-style. With new global conversations around screen time and well-being, this is a great week to check in with your child about how screens fit into their everyday life. Here are conversation-starters to help your kids reflect on their own habits:
📉 Social media breaks really do help. A new JAMA study found that young adults who took a one-week social media detox had lower depression, anxiety, and insomnia — especially those who struggled most beforehand. On average, symptoms of anxiety dropped by 16.1%; symptoms of depression by 24.8%; and symptoms of insomnia by 14.5%. The improvement was most pronounced in subjects with more severe depression.
🇦🇺 Meta begins shutting down under-16 accounts in Australia. Ahead of the country’s new teen social media ban, Meta is revoking access for users under 16 and blocking new accounts. Age verification remains the biggest challenge, though.
🔥 Oxford’s Word of the Year is … “rage bait.” Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content.” Usage has tripled this year, as platforms struggle with content designed to provoke outrage for clicks — another reminder to help kids spot manipulation online.

🤖 Character.ai to ban teens from talking to its AI chatbots: The chatbot platform recently announced that, beginning November 25, users under 18 won’t be allowed to interact with its online companions. The change comes after mounting scrutiny over how AI companions impact users’ mental health. In 2024, Character.ai was sued by the Setzer family, who accused the company of being responsible for their son’s death. Character.ai also announced the rollout of new age verification measures and the funding of a new AI safety research lab.
Teens will still be able to use Character.ai to generate AI videos and images through specific prompts, and there’s no guarantee that the age verification measures will prevent teens from finding ways around them. If your teen uses AI companion apps: talk to them about the safety risks, use any available parental controls, and stay informed about how they interact with AI chatbots. And remember: for every app like Character.ai, there are countless others that aren’t taking the same steps to protect younger users.
Learn more about Character.ai on our blog, and use BrightCanary to monitor their interactions across every app they use — including AI.
🚫 Instagram shows more disordered eating content to vulnerable teens: According to an internal document reviewed by Reuters, teens who said Instagram made them feel worse about their bodies were shown nearly three times more “eating disorder–adjacent” content. Posts included idealized body types, explicit judgment about appearance, and references to disordered eating.
Meta also admitted that its current safety systems failed to detect 98.5% of the sensitive material that likely shouldn’t have been shown to teens at all. While Meta says it’s now cutting teen exposure to age-restricted content by half and introducing a PG-13 standard for teen accounts, these findings highlight a major gap between company promises and real-world outcomes.
Parents shouldn’t wait for algorithms to get it right. If your teen uses Instagram:
Let’s talk about fandoms and why your teen might feel really attached to someone they’ve never met. Whether it’s a YouTuber who “gets them,” a favorite pop star, or an AI companion that feels like a friend, these relationships can make kids feel seen and part of a community. But they can also blur the line between admiration and obsession.
Use these conversation-starters to help your teen think critically about their online relationships:
👀 Elon Musk has launched Grokipedia, an AI-generated online encyclopedia positioned as a rival to Wikipedia — but it’s still unclear how it works. Users have reported factual inconsistencies in Grokipedia’s articles, so now’s a good time to chat with your child about checking their sources.
😔 High schoolers are so scared of getting filmed that they’ve stopped dating. This piece from Rolling Stone explains how the unchecked culture of public humiliation on social media is fueling mistrust among young men, making them hesitant to pursue relationships.
👋 We share even more parenting tips and resources on our Instagram. Say hi!

We’re thrilled to announce that BrightCanary is now available in Canada. Parents across the country can now download the app from the Canadian App Store to help protect their children online.
BrightCanary uses advanced AI technology to monitor what kids type across the apps they use most, from messaging platforms like Discord and WhatsApp to social media and web searches. Parents receive real-time alerts, summaries, and emotional insights so they can step in when their child encounters concerning content like cyberbullying, sexting, or online predators.
“We’re excited to bring BrightCanary to Canadian parents,” said Karl Stillner, CEO of BrightCanary. “Our mission has always been to help families navigate the digital world safely. Expanding to Canada means more parents can protect their kids online while fostering healthy conversations about technology.”
From Toronto to Vancouver, kids are spending more time online than ever before — and with that comes new risks. Whether it’s exposure to inappropriate content, toxic group chats, or unsafe strangers, parents need tools that help them stay informed without reading every single message.
BrightCanary makes that possible by:
Canadian parents now have access to the same powerful monitoring that has already helped thousands of U.S. families feel more confident about their child’s digital safety. BrightCanary empowers parents to stay connected, protect their kids online, and encourage healthy digital habits.
Parents can get started with AI-powered monitoring and choose the plan that best meets their family’s needs. Download BrightCanary in the App Store today and start your free trial.

📊 How tweens and teens use tech, by the numbers: Did you know that 42% of parents say they could do a better job managing their child’s screen time? That’s according to a new report by Pew Research Center. Here’s what the data showed:
We also have new numbers about where kids spend their time online and what risks they face:
One thing that didn’t change from last year: 87% of teens own an iPhone. If you want a parental monitoring app that actually works on Apple devices, you need BrightCanary.
🤖 Meta and Pinterest roll out updates to AI: Meta announced parental controls for its AI chat experiences, including the ability to turn off chats with AI characters for teens. Parents can also disable individual AI characters, review topics their teen discusses with Meta AI, and know that AI experiences are now PG-13 — which means they’ll allegedly avoid nudity, graphic material, and drug use. While these updates sound promising, you should stay involved with your child’s social media use, especially if they’re talking to AI companions.
Meanwhile, Pinterest rolled out a way for users to filter AI images out of their recommendations. It’s relatively common for generative AI images to end up in categories like fashion, beauty, and home decor, but this new setting maintains the human touch in what ends up on your child’s Pinterest feed. If they use Pinterest, we recommend walking them through how to find this feature in Settings > Refine Your Recommendations.
Want to learn how to protect your child from risky AI apps right now? Download our free AI Safety Toolkit for Parents. It includes step-by-step guidance for monitoring AI use and talking to your teen about AI.
🎥 AI slop takes over social media after OpenAI’s Sora launch: OpenAI’s new app, Sora, lets users create and remix short AI-generated videos … and upload their own faces so they can include them in skits. Experts warn this could make deepfakes harder to detect and open the door to harassment and misinformation (as well as copyright infringement). We’re working on a Sora guide for parents on the BrightCanary blog. What questions do you have about it?
It’s never been harder to tell what’s real online. Between AI videos, virtual friends, and algorithm-fed content, helping your teen think critically is key. Here are a few ways to start the conversation:
⚠️ That didn’t take long — experts warn that ChatGPT’s new parental controls are easy to bypass. A Washington Post columnist did it in minutes.
🐻 California Governor Newsom signed two key bills into law. SB 243 requires AI companion apps to prevent conversation about suicide, self-harm, and sexual contact with minors; clearly disclose when users are chatting with AI; and allow citizens to sue AI companies. AB 36 requires warning labels on social media platforms.
💡 Did you know? You can use BrightCanary to monitor your child’s Roblox chats on their iPhone and iPad. Here’s why we recommend monitoring Roblox.

🤖 Free AI safety toolkit for parents: ChatGPT now has parental controls, but are they doing enough for parents? AI is everywhere in your child’s digital world. OpenAI recently launched Sora, a social network app filled with “hyperreal” AI-generated videos. (If your child uses Instagram, a version of this is already available in their app, called “Vibes.”) AI companion apps are having shockingly detailed and intimate conversations with kids. And who’s to say what the future holds for how kids use AI?
Parents need better tools to monitor how their kids use AI today. That’s why we’re excited to bring you this free AI safety toolkit, created by the parents at BrightCanary. In it, you’ll find a cheat sheet of the most popular AI apps, a simple setup checklist to better protect your child, a quiz to evaluate your child’s AI safety, and more. Download the guide (free PDF) today.
Did you know? BrightCanary monitors every app your child uses, including what they type on ChatGPT, Character.ai, Meta AI, and more. Get 20% off BrightCanary Protection to monitor AI prompts and get concerning content alerts with code SAFETY20.
🔄 Instagram testing ability to “tune” algorithms: In an Instagram post celebrating three billion monthly active users, CEO Adam Mosseri announced that users will be able to add and remove topics based on their interests. Instagram, like other platforms, uses an algorithm to determine what your child sees on their feed, based on the content they like, comment on, and share. But social media algorithms have a snowball effect: if your child searches for topics like violence, adult material, or conspiracy theories, they’ll see more negative content on their feed.
Being able to add and remove specific topics means that your child can have more control over what they see and what’s recommended. In the meantime, periodically check out your child’s social media feeds together. And if their feed needs a clean-up, we’ve covered how to reset your child’s social media feeds — and how to talk to them about why that matters.
Talking about AI doesn’t have to be awkward. These conversation starters come from our free AI safety toolkit for parents. Use these prompts to start the dialogue, and download the guide for even more safety tips.
💸 ChatGPT users will be able to use Instant Checkout to make purchases from Etsy and Shopify, all without having to leave the app — so, now’s a good time to talk to your child about purchase limits and why they shouldn’t use ChatGPT to buy their entire Christmas list.
❤️🩹 October is National Bullying Prevention Month. What should you do if your child is getting bullied on social media? Save these tips.
📱 One in five Americans regularly get their news from TikTok, a sharp increase from 2020.

AI is everywhere. From ChatGPT to Snapchat’s My AI, Character.ai, and even Instagram’s Meta AI, chances are your child has already encountered an artificial intelligence chatbot.
Some of these tools can be fun and helpful. But we’ve also seen kids use AI in ways that are unsafe, unhealthy, or emotionally risky. Parents can’t wait for platforms to catch up. Kids need protection now.
That’s where this toolkit comes in.
Inside, you’ll find:
This guide is here to help you feel prepared, not overwhelmed. Use it at the dinner table or your next tech check-in.
Bonus: Get 20% off BrightCanary Protection to monitor AI prompts and get alerts. Use code SAFETY20 at checkout.
AI companions and chat tools are built into the apps kids already use. They can be helpful for brainstorming and study practice, but they can also surface inaccurate info, deliver age-inappropriate content, or encourage over-reliance.
This toolkit gives you the words, the rules, and the settings to keep kids safer while they learn how to use AI responsibly.
BrightCanary monitors what your child types across apps — including AI chats — and sends real-time alerts for concerning content, plus concise summaries so you don’t have to read every message.
What does the promo code cover?
Use SAFETY20 for 20% off the annual BrightCanary Protection plan to monitor AI usage.
Is the AI Safety Toolkit free?
Yes. The PDF is free to download and share with caregivers and schools.
Do I need BrightCanary to use the toolkit?
No. The toolkit stands alone. BrightCanary adds real-time monitoring and alerts if you want ongoing protection.
Your kid’s relationship with AI is being shaped right now. A clear plan, a few safety settings, and calm conversations go a long way.
Download the AI Safety Toolkit (PDF).

Pornography is more accessible than ever, and kids are seeing it younger than most parents realize. Studies show that boys first see porn between the ages of 9 and 11, on average. With mainstream porn sites delivering violent and degrading images for free, experts say pornography has become one of the biggest crises of the digital age.
We spoke with Dr. Gail Dines, Founder & CEO of Culture Reframed, and Dr. Mandy Sanchez, Director of Programming, about what parents need to know, how to start conversations, and what families, schools, and organizations can do together to protect kids.
Culture Reframed is a global, science-based organization that equips parents, educators, and professionals to address the harms of pornography on youth.
Through robust online courses, resources, and advocacy, they help ensure kids develop healthy, respectful, and egalitarian views of sex and intimacy. Every year, they support tens of thousands of families worldwide.
What inspired the organization’s founding, and what is your mission today?
Dr. Gail Dines: Most of us on the Culture Reframed (CR) team have been studying the effects of pornography on young people for many years. What galvanized us into founding CR is the way mainstream, free pornography has become so accessible to youth. Pornography has become the wallpaper of their lives.
In the absence of comprehensive sex education, young people are turning to pornography. The adolescent brain is especially vulnerable to such images because it is still in formation; the prefrontal cortex, which allows for rational decision-making, isn’t yet fully developed. Young people, especially boys, are more likely to develop their sexual template and sexual scripts from pornography, which can lead to anxiety, depression, addiction, and sexual abuse of others.
Our mission is to work to stop the emotional, behavioral, and sexual harms of pornography on young people. We have developed courses for parents, educators, and medical experts because these are the primary people tasked with protecting the well-being of young people. Education is a central part of our work, and our courses are unique in that they are science-based but accessible.
What are some of the biggest misconceptions parents have about how pornography impacts young people?
GD: One major misconception is “not my child.” If your child has a device, the question isn’t if they’ll see pornography, but when. Even if they’re not looking for it, the porn industry develops algorithms that target young people, often through social media platforms.
Many parents are not aware of just how violent mainstream pornography is. We encourage parents to take a quick look at the major porn sites, such as Pornhub, so they can see what their kids are seeing. They most likely will be horrified.
Parents also need to become familiar with social media platforms such as Snapchat and TikTok because these can often become a gateway to pornography use. Studies show that these sites are full of pornographic images, as well as men trolling to groom kids into becoming victims of sexual abuse.
This is a lot to ask of parents, but given the nature of online life, it is as important as educating your child about the harms of drugs. Pornography has become one of the major crises of the digital age.
Mandy Sanchez: The second misconception is that porn “is not that bad.” The fact is that most mainstream, online pornography is violent and degrading, depicting harmful stereotypes and unhealthy sexual scripts. There is more than four decades of scientific research that documents the social, emotional, behavioral, and cognitive harms of pornography to young people.
Finally, parents often think there is nothing they can really do about their kiddos’ imminent exposure to pornography — and this misconception often precludes many parents from believing they have any control. But the truth is, parents are perfectly positioned to help their children build resistance and resilience to pornography.
By becoming knowledgeable, skilled, and confident to have critical conversations, parents can offer their kids an alternative script: healthy and safe messages about sex and relationships based on their age and stage of development.
If a parent suspects their child has been exposed to porn, what’s the most important first step they should take?
GD: Approach them without shame or blame. Young people feel shame (among other emotions) when watching pornography. The goal is to help them understand that it is not their fault, but rather the fault of a porn industry run amok, and the failure of policymakers to address the problem of easily accessible pornography.
If your child has seen pornography, you need to have a calm, honest, and inviting conversation about the way they feel. They will be disturbed by the images, but often lack the vocabulary to put these feelings into words. Help them to think through the ways they feel and provide plenty of room for them to express themselves. You can ask questions, but don’t lecture your kids.
Importantly, keep the conversations short. No young person wants to be sitting across from their parent talking about pornography, so make the conversations as inviting as possible.
If you feel your child is developing problematic porn use, which involves behaviors such as isolation from peers and family, lack of sleep, excessive time spent online, and mood shifts, we recommend finding a therapist who specializes in problematic porn use among young people.
What practical tips would you give parents to start age-appropriate conversations with their kids about pornography and hypersexualized media?
MS: Educate. Compose. Communicate. Monitor. Report.
First, I encourage parents to become knowledgeable about the harms of porn, how it shapes and influences young people, and how the industry is exposing them.
Next, COMPOSE yourself in order to create the space for a calm, safe conversation. Remember to respond with empathy and care, instead of reacting with shame or blame. Aim for short, regular conversations that meet your kiddos where they are. And if you don’t know where they are or what they’re doing or feeling, ask!
Be present and watch for warning signs that your kiddo may be struggling. Look for teachable moments in everyday media to educate kids about consent, body boundaries, digital safety and well-being, and safe, healthy behavior.
Monitor connected devices with privacy settings and parental controls.
Finally, report online exchanges involving child sexual abuse materials to the National Center for Missing and Exploited Children through the CyberTipline.
What role do you think parents, schools, and organizations like Culture Reframed can play together in creating healthier digital spaces for kids?
MS: Parents, schools, and organizations like Culture Reframed can work together to shift the cultural narratives about pornography, reframing the conversation around healthy, safe, connected relationships among young people.
Research consistently shows that when we have porn-critical conversations with young people, risky behavior is reduced by 75%!
When these groups unite to create and maintain healthier, safer digital spaces for young people, we become an unstoppable force. We can reduce porn’s harmful effects and provide the space for young people to develop authentic, healthy, safe, and rewarding relationships.
Pornography has become one of the defining crises of the digital age — and kids are on the front lines. Parents can’t rely on schools, platforms, or tech companies to protect their kids. It starts with open conversations, proactive monitoring, and supportive resources.
Parents don't have to do it alone. Culture Reframed offers science-based courses to help parents build resilience in their children against porn culture. And BrightCanary helps parents monitor what kids type across every app they use, so you can step in when it matters most.

🤖 Nearly three-fourths of teens use AI companions: Does your teen have a secret girlfriend? What if she’s AI? According to Common Sense Media, 72% of teens ages 13–17 have used AI companions — apps designed specifically for emotional support, connection, and human-like interactions. Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, and 10% said it felt more satisfying.
But AI companies don’t have a good track record of keeping child safety in mind with these conversations. It wasn’t until recently that OpenAI announced that ChatGPT will stop talking about suicide with teens, and the FTC is demanding information from OpenAI, Snap, Meta, and other tech companies about the safety measures in place to protect kids who interact with their AI chatbots.
Kids are having vulnerable, emotionally charged conversations with AI characters that aren’t designed with age-appropriate content filters, and kids are suffering because of it. If your child uses AI apps like Polybuzz, Character.ai, and ChatGPT, you need to stay informed about the content of their conversations, because it’s not always fun and games.
We’re launching a new way to monitor your child’s AI apps in BrightCanary. You’ll be able to see not only what apps they’re using, but also what they’re sending and any red flags in their conversations. The update rolls out this week — stay tuned.
🎮 PlayStation debuts new parental control app: Good news if your kiddo is a gamer — Sony’s new PlayStation Family App offers robust parental controls and insights across PS4 and PS5. The app, available on iOS and Android, allows parents to see what games their kids are playing, approve extra playtime requests, restrict certain games, and customize privacy settings. Parents can also get real-time notifications when their kids are playing, as well as set playtime limits for each day of the week, among other features. The PlayStation Family App is available now.
📱 The rules for reading your teen’s text messages: Talk to five other parents, and you’ll get five different approaches to monitoring phones. Some parents spot-check their child’s texts, while others take a hands-off approach. What’s the “right” way? According to the experts, there are a few ground rules worth following.
Check out the full list at Good Housekeeping, and save these nine mistakes parents make with text message monitoring (we’re all guilty of #8).
Teens are experimenting with AI companions for connection — sometimes instead of turning to friends or family. That’s why it’s important to talk about what these chats mean, what feels supportive, and what feels harmful.
💢 One way to get teens to listen to you: talk the talk and walk the walk. That’s according to a new study by an international team of researchers, which concluded that “the way teenagers receive their parents’ warnings depends less on the message itself and more on whether they see their parents as genuinely living up to their own purported values.”
⏳ Oracle is calling dibs on TikTok. In the latest on TikTok’s fate in the US, the software giant Oracle will license TikTok’s algorithm. For now, things are status quo on your child’s favorite app to watch GRWM videos — President Trump extended the ban deadline another 120 days to allow time for the transaction to take place.
📚 “Digital literacy should be a part of every child's education, and today it must include AI literacy,” writes digital literacy educator and advocate Diana E. Graber.

Artificial intelligence (AI) apps are everywhere, and teens are using them more than ever. From ChatGPT to Character.ai to Snapchat’s My AI, these chatbots are marketed as harmless, but they can expose kids to serious risks like dependency, inappropriate content, and even harmful advice.
Parents are asking: what parental controls exist for AI apps? And more importantly: what’s the best way to monitor them?
This guide breaks down the parental controls that exist for popular AI apps and the best way to monitor your child’s AI conversations.
The best parental monitor for AI apps is BrightCanary. Unlike app-based parental controls, BrightCanary works across every app your child uses — including AI chatbots.
While built-in parental controls for AI apps are minimal, BrightCanary gives parents a simple way to monitor all AI interactions in one place.
AI safety has become a major issue for parents and lawmakers.
The Federal Trade Commission is investigating the effects of AI chatbots on children. The FTC’s goal is to limit potential negative effects and inform users and parents of their risks, demanding child safety information from OpenAI, Meta, Alphabet, Snap, xAI, and other tech companies.
The news comes after a slew of stories about teenagers dying by suicide or facing extreme mental health crises after extended conversations with AI chatbots. Among them was Sewell Setzer, a 14-year-old boy who took his own life following months of romantic conversations with AI characters on the platform Character.ai.
In August 2025, Matthew and Maria Raine brought the first wrongful death suit against OpenAI. They alleged that ChatGPT “coached” their son Adam Raine into suicide over several months.
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, Setzer’s mom, in a recent testimony before the Senate Judiciary Committee.
Odds are, your teen is chatting with AI. According to Common Sense Media, around 70% of teens use AI companions, but only 37% of parents know that their kids are using the apps.
Here’s an overview of the existing parental controls for popular AI apps your child might use:
Every platform handles parental controls differently, and some offer none at all. Additionally, teens can fake their ages, use private browsers, or download new AI apps parents haven’t heard of — all of which can negate the parental controls on AI platforms.
It’s an unfortunate truth that AI companies didn’t start to roll out meaningful parental controls until tragedies made headlines. Parents need to stay informed and involved, but kids can fall into the digital danger zone faster than they might think.
BrightCanary monitors every app your child uses, including AI companions. The child safety app just released a new feature that shows you every AI app your child is using, along with summaries of their activity.
Kids use AI for a range of activities, like entertainment and homework help — but if they’re using AI inappropriately, you’ll get an alert in real time. It all starts with a keyboard that you install on your child’s iPhone. It’s easy to set up, and it analyzes what your child types across every app: messaging, social media, forums, and AI.
Unlike app-by-app controls, BrightCanary is a unified parental monitor for AI that adapts to whatever platform your child is using.
Even with BrightCanary, it’s important to pair monitoring with parenting strategies:
AI companions like ChatGPT, Character.ai, and Snapchat’s My AI are widely used by teens, but built-in parental controls are minimal and inconsistent.
The best parental monitor for AI apps is BrightCanary, which works across every app, provides real-time alerts, and helps you understand your child’s conversations with AI through summaries and emotional insights.
Protect your child today. Download BrightCanary and get started for free.