Sextortion, Deepfakes, and Crypto Scams: Online Threats Targeting Teenagers

By Bill Green, CFE, CISA
March 23, 2026

We recently covered account security threats: phishing, malware in game mods, fake stores, and subscription traps. Those scams steal accounts and charge cards. The threats in this article are worse. They target your child's reputation, mental health, and money. Here’s what parents need to know about the online scams targeting teenagers right now — and how to talk to your kids before they encounter them. 

Threats covered in this article:

  • Sextortion and impersonation scams
  • Deepfake bullying and AI-generated explicit images
  • Misinformation and AI-generated content
  • Money mule recruitment
  • Crypto scams and meme coin fraud

Sextortion and impersonation: What parents should know

Sextortion is one of the most dangerous online threats facing teenagers today. Here’s how it works: scammers create fake profiles pretending to be a friend, classmate, or celebrity. They steer the conversation toward private photos, then threaten to share those images unless the teen pays up — or sends more.

The scale of the problem is alarming:

  • In 2024, NCMEC received more than 465,000 reports of online enticement (including sextortion), a 192% increase from 2023. 
  • Financial sextortion is driving the surge, with NCMEC receiving nearly 100 reports per day. 
  • Victims are mostly teenage boys.
  • At least 36 teenage boys have died by suicide after being victimized since 2021, and those are only the reported cases. Most victims never report it at all. 

AI has made these scams significantly harder to detect. Voice cloning works with just seconds of audio from a social media video, and AI-generated profile photos are becoming very hard to spot. A skilled scammer can build a convincing fake identity faster than your kid can finish a homework assignment.

What to do if your child is targeted

  • Make your rules clear: if anyone asks for pictures, pressures them to meet up, or makes a threat, tell an adult immediately.
  • Do not pay.
  • Preserve the messages. Blocking the account before saving evidence is a common mistake.
  • Report to the platform and, if your child is in immediate danger, to law enforcement. The FBI’s Internet Crime Complaint Center (IC3) and NCMEC’s CyberTipline both accept reports.

BrightCanary monitors what your child types across all apps, including messaging platforms where sextortion attempts begin. If a conversation raises red flags, you’ll see it in real time.

Deepfakes and misinformation

The FBI has warned that teens are using AI to alter ordinary photos of classmates into fake nude images, starting from nothing more than a single picture. No technical skill is required; free tools make it accessible to anyone. 

Some shocking stats on AI-generated exploitation content:

  • NCMEC tracked 4,700 reports of AI-generated child sexual exploitation content in 2023, 67,000 in 2024, and 440,000 in just the first six months of 2025. 
  • A RAND survey found that 22% of high school principals and 20% of middle school principals reported deepfake bullying incidents during the 2023–2025 school years. In other words, roughly one in five secondary schools has already dealt with this.

Victims often stay silent because they fear they won't be believed or will lose their devices. That’s why it’s critical to talk to your kids about deepfakes before they encounter them. 

What parents can do

  • If your child is targeted, document everything but do not download the images.
  • Report to the platform and the school. 
  • If your child has created deepfakes of others, address it immediately. Creating and distributing fake explicit images of minors is a criminal offense.

Misinformation: Why your teen can’t tell what’s real

Did you know that 43% of young adults get their news from TikTok, and that YouTube is among the top news sources for teens? These platforms optimize for engagement, not accuracy, and they have no reliable mechanism to verify whether a creator is a real person or an AI-generated character.

AI chatbots add another layer of risk: a NewsGuard study found leading chatbots gave false information 35% of the time on controversial topics. Your kid is getting confident-sounding information with no way to evaluate whether any of it is true. 

What parents can do

  • You cannot filter misinformation like you filter explicit content. This requires teaching digital literacy, not installing a setting.
  • Teach your child to question the source and check whether the underlying facts are true. 
  • Model the habit yourself. Let your kids see you fact-checking before sharing something.

Money mule recruitment: “Easy money” or a criminal record?

Money mule schemes recruit teens with promises of quick cash: receive money in an account, forward it elsewhere, and keep a cut. It’s money laundering. 

The FBI has documented teenagers recruited on social media and gaming platforms by criminals posing as IT service or gaming companies, asking kids to accept payments through Venmo, PayPal, or Cash App, keep a cut, then convert the rest to crypto. They are laundering fraud proceeds.

Adults convicted in money mule cases have received prison sentences and six-figure restitution orders. Minors are unlikely to face prison, but juvenile charges, a criminal record, and restitution payments are all on the table. 

What parents can do

  • Teach kids that no legitimate job uses their personal accounts to move money for strangers.
  • If your child mentions an opportunity that involves receiving and forwarding money, take it seriously and investigate before they do anything.
  • Watch for recruitment attempts on gaming platforms and Discord, where these schemes are increasingly common.

Crypto scams and meme coins

Kids get targeted with fake crypto giveaways, "send me $25 and I'll send back $100" flipping scams, and coaching to use a parent's credit card to buy crypto. Once the money is converted and sent, it is gone. But the bigger problem goes beyond traditional scams: meme coins.

Your teenager has heard of Dogecoin. Platforms like pump.fun let anyone create a new cryptocurrency in seconds. The creator hypes a token, waits for people to buy in, sells off, and disappears with the money. In November 2024, a 13-year-old did exactly that on a livestream, promoted a token called Gen Z Quant, dumped his holdings for $30,000, and flipped off the camera. He was not old enough to drive, but he was old enough to run a classic pump-and-dump using the service.

Over 11.6 million crypto projects failed in 2025, according to CoinGecko. Solidus Labs research found that roughly 98.6% of tokens on pump.fun exhibited rug-pull or pump-and-dump behavior. 

AI supercharges all of this. Scammers use it to generate polished websites, fake community engagement, and bot-driven hype that makes a worthless token look legitimate, including thousands of positive comments from accounts that did not exist yesterday and promo videos featuring people who are not real. A kid scrolling Discord or TikTok cannot tell genuine excitement from a manufactured pump.

Your kid does not need to be targeted by a scammer to lose money here. They can do it on their own by chasing a meme coin that looked exciting on TikTok and was worthless two days later. 

What parents can do

  • Teach your teen that cryptocurrency is not a side hustle or a get-rich opportunity. It is speculation, and most people who chase meme coins lose money.
  • If your child mentions a specific token, coin, or “opportunity” they saw online, slow things down, look it up together, and ask who came up with it and why.
  • Watch for anyone coaching your child to use your credit card or payment account to buy crypto.

The bottom line

These threats are evolving fast. AI has made them more convincing and harder to detect. Parental controls cannot filter a deepfake shared in a group chat or stop your teenager from buying a meme coin on their phone.

You are the child's best security feature. Device security and parental controls help, but they do not replace parenting. If you stay in the loop, your kid will show you the threatening DM instead of hiding it. They will ask about the crypto opportunity before spending money. They will tell you about the deepfake at school instead of suffering in silence. 

Your job is to make sure your child knows that no matter what happens, no matter how embarrassed or scared they are, they can come to you for help. That message, delivered clearly and reinforced regularly, is still your most powerful defense.

BrightCanary monitors everything your child types across all apps — including the messaging platforms and social media where sextortion attempts, money mule recruitment, and crypto scams begin. If a conversation raises red flags, you’ll see real-time alerts, AI-powered summaries, and emotional insights informed by APA guidelines. Download BrightCanary and start your free trial today.
