Australia begins deactivating under-16 accounts as ban nears rollout
Australia moved on December 5 to deactivate or freeze social media accounts registered to users under 16 as a world-first ban prepares to take effect. The measure forces major platforms to act quickly, raises questions about age verification and privacy, and sets a precedent other governments are watching.

On December 5, major social media companies began deactivating or freezing accounts they believed were registered to users under 16, following a decision by Australia’s eSafety regulator and the federal government to implement a law barring people under 16 from holding accounts on leading platforms. The prohibition takes effect on December 10, 2025, and carries penalties for noncompliance that can reach into the tens of millions of Australian dollars.
Platforms that have announced compliance steps include Meta’s Facebook, Instagram and Threads, along with TikTok, Snapchat and YouTube, all of which the government has urged to comply with the new legal framework. In several cases the companies notified account holders and their parents or guardians that affected accounts would be frozen and that users should download any data they wanted to keep before access was restricted.
The law is a landmark attempt to limit minors’ exposure to harms associated with social media while creating a statutory obligation for platforms to enforce age limits. The eSafety Commissioner, Julie Inman Grant, framed the measure as a potential global precedent, and officials have signaled that Canberra expects vigorous enforcement where platforms fail to comply.
Implementation has prompted a fast-moving debate among technology companies, child safety advocates and privacy groups about how to verify age and enforce the rule without creating new harms. Platforms have adopted different short-term approaches, and advocates have warned that heavy-handed verification requirements could push children toward unsafe alternatives or require minors to surrender sensitive identity information. Civil society groups have also urged transparent rules on how accounts are deactivated and whether data will be permanently deleted or retained by platforms.
Experts say the practical challenge is proving age in an online environment designed around anonymous or pseudonymous interactions. Governments and companies have discussed methods ranging from parental consent systems to identity checks or algorithmic signals, but no single approach commands consensus. Lawmakers in other countries are watching closely, with some regulators considering whether a similar model might be feasible or desirable in their jurisdictions.

For platforms, the new Australian law forces a calculus between compliance risk and the impact on their user base. Fines that can reach into the tens of millions of Australian dollars raise the stakes for multinational companies that rely on scale, and could change how global platforms design onboarding and verification features. For families, the move will alter how teenagers access social media services and how parents must manage online accounts for younger children.
The rollout also raises legal and ethical questions about digital exclusion, the protection of vulnerable young people who rely on online communities for support, and the data privacy implications of age verification processes. Observers expect litigation and political pushback in Australia and beyond as stakeholders test the boundaries of the law and seek clearer guidance on enforcement standards.
As the December 10 start date approaches, the Australian experiment will provide an early test case for whether statutory age limits can be implemented at scale and whether they achieve the stated goal of reducing harm without producing adverse side effects. Governments around the world are likely to study the outcome as they consider their own rules for children's online lives.