
We recently covered account security threats: phishing, malware in game mods, fake stores, and subscription traps. Those scams steal accounts and run up charges on your cards. The threats in this article are worse. They target your child's reputation, mental health, and money. Here’s what parents need to know about the online scams targeting teenagers right now — and how to talk to your kids before they encounter them.
Threats covered in this article:
- Sextortion
- AI deepfakes and "nudify" tools
- Misinformation from social platforms and AI chatbots
- Money mule recruitment
- Crypto and meme coin scams
Sextortion is one of the most dangerous online threats facing teenagers today. Here’s how it works: scammers create fake profiles pretending to be a friend, classmate, or celebrity. They steer the conversation toward private photos, then threaten to share those images unless the teen pays up — or sends more.
The scale of the problem is alarming.
AI has made these scams significantly harder to detect. Voice cloning works with just seconds of audio from a social media video, and AI-generated profile photos are becoming very hard to spot. A skilled scammer can build a convincing fake identity faster than your kid can finish a homework assignment.
BrightCanary monitors what your child types across all apps, including messaging platforms where sextortion attempts begin. If a conversation raises red flags, you’ll see it in real time.
The FBI has warned that teens are using AI tools to alter ordinary photos of classmates into fake nude images; a single picture is all it takes. No technical skill is required, because free tools make it accessible to anyone.
The statistics on AI-generated exploitation content are shocking.
Victims often stay silent because they fear they won't be believed or will lose their devices. That’s why it’s critical to talk to your kids about deepfakes before they encounter them.
Did you know that 43% of young adults get their news from TikTok, and that YouTube is among the top news sources for teens? These platforms optimize for engagement, not accuracy, and they have no reliable mechanism to verify whether a creator is a real person or an AI-generated character.
AI chatbots add another layer of risk: a NewsGuard study found leading chatbots gave false information 35% of the time on controversial topics. Your kid is getting confident-sounding information with no way to evaluate whether any of it is true.
Money mule schemes recruit teens with promises of quick cash: receive money in an account, forward it elsewhere, and keep a cut. It’s money laundering.
The FBI has documented teenagers recruited on social media and gaming platforms by criminals posing as IT service or gaming companies, who ask kids to accept payments through Venmo, PayPal, or Cash App, keep a cut, then convert the rest to crypto. The kids who agree are laundering fraud proceeds.
Adults convicted in money mule cases have received prison sentences and six-figure restitution orders. Minors are unlikely to face prison, but juvenile charges, a criminal record, and restitution payments are all on the table.
Kids get targeted with fake crypto giveaways, "send me $25 and I'll send back $100" flipping scams, and coaching to use a parent's credit card to buy crypto. Once the money is converted and sent, it is gone. But the bigger problem goes beyond traditional scams: meme coins.
Your teenager has heard of Dogecoin. Platforms like pump.fun let anyone create a new cryptocurrency in seconds. The creator hypes a token, waits for people to buy in, sells off, and disappears with the money. In November 2024, a 13-year-old did exactly that on a livestream: he promoted a token called Gen Z Quant, dumped his holdings for $30,000, and flipped off the camera. He was not old enough to drive, but he was old enough to run a classic pump-and-dump using the service.
Over 11.6 million crypto projects failed in 2025, according to CoinGecko. Solidus Labs research found that roughly 98.6% of tokens on pump.fun exhibited rug-pull or pump-and-dump behavior.
AI supercharges all of this. Scammers use it to generate polished websites, fake community engagement, and bot-driven hype that makes a worthless token look legitimate, including thousands of positive comments from accounts that did not exist yesterday and promo videos featuring people who are not real. A kid scrolling Discord or TikTok cannot tell genuine excitement from a manufactured pump.
Your kid does not need to be targeted by a scammer to lose money here. They can do it on their own by chasing a meme coin that looks exciting on TikTok and is worthless two days later.
These threats are evolving fast. AI has made them more convincing and harder to detect. Parental controls cannot filter a deepfake shared in a group chat or stop your teenager from buying a meme coin on their phone.
You are your child's best security feature. Device security and parental controls help, but they do not replace parenting. If you stay in the loop, your kid will show you the threatening DM instead of hiding it. They will ask about the crypto opportunity before spending money. They will tell you about the deepfake at school instead of suffering in silence.
Your job is to make sure your child knows that no matter what happens, no matter how embarrassed or scared they are, they can come to you for help. That message, delivered clearly and reinforced regularly, is still your most powerful defense.
BrightCanary monitors everything your child types across all apps — including the messaging platforms and social media where sextortion attempts, money mule recruitment, and crypto scams begin. If a conversation raises red flags, you’ll see real-time alerts, AI-powered summaries, and emotional insights informed by APA guidelines. Download BrightCanary and start your free trial today.