A.
BrightCanary uses advanced machine learning systems to evaluate the text, images, and videos your children see and post. We can alert you about:
Nudity and sexual references
Weapons
Alcohol and drug references
Offensive/hate signs and gestures
References to extremist ideologies
Graphic violence, blood, wounds, gore
Profanity
Discriminatory insults
You can control which types of content you see more or fewer alerts about by going to Settings → Manage Family → Adjust.
Our alert system is still a work in progress, so you are likely to get some false positives (or, occasionally, false negatives), but we’re continuously working to improve it. We’d love to hear from you about how it’s working at feedback@brightcanary.io.