Meta Introduces Smarter AI to Detect Underage Users
Last update: May 14, 2026

Meta is stepping up its fight to keep teens safe online, introducing smarter AI tools that can better detect underage users and automatically place them into safer, age-appropriate experiences across its platforms.
According to cbinews.tv, Meta is expanding its use of artificial intelligence to strengthen how it verifies age and protects teenagers across Facebook, Instagram, and Messenger.
The company says it is combining AI systems, product design updates, and parental support tools to build safer online spaces where young users are better protected by default.
Smarter AI to catch underage accounts
Meta has a strict rule that users must be at least 13 years old, and it’s now leaning more heavily on AI to enforce that rule at scale.
The platform is upgrading its systems to better spot accounts that may belong to younger users. These improvements include:
Reading context from profiles: AI now looks at signals like posts, captions, bios, and comments to detect clues that suggest a user might be underage—such as references to school life or age milestones.
Understanding visual cues: New tools can estimate age ranges from photos and videos without using facial recognition or identifying individuals. Instead, they rely on general visual patterns combined with other signals.
Stronger enforcement steps: If an account is flagged as potentially underage, Meta may require age verification—and in some cases, remove the account if the age can’t be confirmed.
Better reporting tools: Users can now more easily report suspected underage accounts through simplified in-app options and the Help Centre.
AI-assisted moderation: Human review teams are now being supported by AI systems that help speed up and standardise decisions.
Blocking repeat violations: Meta is also improving its ability to stop users who try to get around the rules by creating new accounts.
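To make the list above concrete, here is a minimal sketch of how weak signals like these could be combined into a single "possibly underage" score that triggers an age-verification step. The signal names, weights, and threshold are purely illustrative assumptions for this article, not Meta's actual system.

```python
# Hypothetical sketch: combining weak signals (text clues, visual age
# estimate, user reports) into a score that decides whether an account
# should be sent for age verification. All weights are assumptions.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    text_underage_score: float   # 0..1, from profile/caption/bio analysis
    visual_age_estimate: float   # rough age estimate from photos/videos
    report_count: int            # number of user reports on the account


def flag_for_review(s: AccountSignals, threshold: float = 0.7) -> bool:
    """Return True if combined signals warrant an age-verification step."""
    score = 0.5 * s.text_underage_score
    # A visual estimate below the age-13 minimum contributes strongly,
    # scaled by how far below 13 it falls.
    if s.visual_age_estimate < 13:
        score += 0.4 * (13 - s.visual_age_estimate) / 13
    # Each user report adds a small amount, capped so reports alone
    # cannot trigger enforcement.
    score += min(0.1 * s.report_count, 0.3)
    return min(score, 1.0) >= threshold
```

In a design like this, no single signal is decisive; an account is only escalated when several independent clues point the same way, which matches Meta's stated approach of layering signals rather than relying on any one detector.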
Some of these tools are already active globally, while others are still being rolled out gradually across different regions.
Expanding Teen Account protections
Meta is also scaling its “Teen Account” system, which automatically applies stricter safety settings for users under 18.
These protections limit unwanted contact, reduce exposure to sensitive content, and apply default safety settings designed for younger audiences. Meta says hundreds of millions of teens are already covered under this framework.
In addition, the company is improving its systems to detect teens who may have entered a false adult birthday and automatically move them into safer, age-appropriate settings.
Helping parents stay involved
Beyond technology, Meta is also trying to bring parents into the conversation.
New notifications and guidance tools are being introduced to help parents understand how age verification works and encourage honest conversations with teens about online safety.
These updates build on Meta’s existing Family Centre resources, which provide tools and educational support for families managing digital habits.
Industry-wide push for better age checks
Meta also believes that online safety shouldn’t rest on individual apps alone. Instead, it is advocating for age verification systems at the operating system or app store level.
According to the company, this approach would make protections more consistent across apps while also improving privacy and reducing the need for repeated checks on every platform.
Meta adds that it also uses a mix of behavioural signals, AI-based estimation, and user reports to identify cases where users may be misrepresenting their age.
#Meta #OnlineSafety #TeenSafety #AI #ArtificialIntelligence #Instagram #Facebook #DigitalSafety #TechNews #SocialMediaSafety #cbinewsTV

