TikTok to deploy automated age‑detection across Europe amid regulatory push
TikTok will roll out a Europe-specific age-detection system in the coming weeks to identify underage accounts and meet tightening regional rules. The move raises privacy and accuracy questions.

TikTok will begin deploying a new age-detection system across Europe in the coming weeks after a year-long pilot, the company said, as regulators press platforms to do more to keep children off social networks. The technology, built specifically for the European market, will flag accounts it predicts may belong to users under the platform's minimum age and refer those cases to human moderators for review.
The system combines multiple signals — profile information, posted videos and behavioural patterns — to estimate whether an account may be operated by someone under 13. TikTok emphasises that accounts flagged by the algorithm will not be automatically banned; instead, specialist moderators will review flagged profiles and determine any subsequent action, including potential removal.
For users who contest age-related actions, the platform will offer appeals using third-party verification tools. Those tools include facial-age estimation supplied by verification firm Yoti, as well as other checks such as credit-card confirmations and government-issued identification. TikTok said the features were developed in consultation with European regulators and designed to meet data-protection standards such as the General Data Protection Regulation.
The rollout is billed as a Europe-wide initiative encompassing the European Economic Area, the United Kingdom and Switzerland, and TikTok says it will notify users as the system becomes active in their region. The company framed the approach as part of a multi-layered age-assurance strategy and acknowledged there is no single globally agreed method of confirming ages while preserving privacy.
Regulators across Europe have stepped up scrutiny of age verification. The European Parliament has pressed for clearer age limits on social platforms, and several countries are considering or enacting stricter measures. Denmark has proposed banning social media for those under 15, and a UK pilot of earlier TikTok measures reportedly led to the removal of thousands of accounts believed to belong to children under 13. Australia last year imposed what its government described as the world’s first social-media ban for children under 16, underscoring global momentum to tighten access.
The technical approach raises immediate questions for privacy and child-safety advocates. Facial-age estimation and behavioural profiling can boost detection rates, but both techniques are imperfect and can reflect biases in data and algorithms. False positives risk unjust account suspensions or burdensome verification steps for legitimate teens; false negatives leave younger children exposed to content and contacts not intended for them. Reliance on third-party verifiers also raises concerns about how sensitive verification data will be stored, used and shared under GDPR.
TikTok's decision to route flagged cases to human moderators mitigates some of the risk of algorithmic error, but it leaves open how consistently reviews will be applied and how large the moderation workforce will need to be. Rights groups and privacy regulators have called for independent audits and for transparency on accuracy, error rates and data-retention policies in verification processes.
As the rollout begins, European data-protection authorities, led by Ireland’s regulator, will be watching to ensure compliance with privacy rules. For millions of young users and their families, the changes could mean stronger protections — or new barriers and privacy trade-offs — depending on how accurately and fairly the system operates in practice.