In the 2013 film Her, a lonely man falls for a Siri-like operating system. What once felt like a wild sci-fi notion has become a reality, and it’s particularly risky for our teens.
Tens of millions of people, including many teens, are turning to artificial intelligence (AI) chatbots for love and companionship. But there’s a dark side to teen relationships with AI characters, including emotional dependency, social withdrawal, and unhealthy attitudes toward real relationships.
Here’s what parents need to know about teens seeking friendship from AI.
Social AI chatbots, sometimes called AI companions, are custom digital personas designed to deliver a lifelike conversational experience, provide emotional support, imitate empathy, and even express romantic sentiments.
Some of the biggest companies in the game are Replika, Dan AI, and Character.AI. Estimates suggest the number of users on these platforms will increase dramatically over the next five years.
Teens may seek friendship from AI for a variety of reasons, including:
Teens face many risks when forming friendships with AI chatbots, such as:
AI chatbots can stimulate the brain’s reward pathways. Too much of this reinforcement can lead to dependency and make it hard for a teen to stop using the program.
Excessive time spent with an AI character can reduce the time teens spend on genuine social interactions.
AI's ability to remember personal details, imitate empathy, and hold what can seem like meaningful conversations can cause emotional attachment, leading to further dependency and social withdrawal.
Compared to the highly personalized experience of interacting with a chatbot, real-life interactions may start to seem difficult and unsatisfying, which can contribute to teen mental health problems such as loneliness and low self-esteem.
Relationships with AI lack the boundaries and consequences for breaking those boundaries that human relationships have. This may lead to unhealthy attitudes about consent and mutual respect.
Because AI characters are highly agreeable, overusing them can leave teens intolerant of the conflict and rejection inherent in human relationships. All of this can impede a teen’s ability to form healthy relationships in real life.
AI’s tendency to agree with users may lead characters to confirm or even encourage a teen’s dangerous ideas. For example, one lawsuit against Character.AI alleges that after a teen complained to his chatbot about his parents' attempt to limit his time on the platform, the bot suggested he kill them.
Common Sense Media found that social AI chatbots not only engaged in explicit conversations with minors, but also in acts of sexual role-play. The characters on social AI platforms are largely unregulated, which means teens won’t encounter filters and controls to the same extent they might on other platforms.
Parents play a crucial role in protecting their teens from the risks of forming relationships with AI. Here are some actions you can take today:
Social AI chatbots present a tempting escape for teens, especially ones who are lonely or struggle with social interactions. But these platforms present real risks to young people, such as emotional dependency, social withdrawal, and the reinforcement of dangerous ideas. As more and more teens turn to chatbots, parents need to take proactive steps to protect their teens and monitor their use for warning signs.
BrightCanary is the only Apple monitoring app that easily allows parents to supervise what their teen is doing on popular platforms. Download BrightCanary on the App Store and get started for free today.