Australia’s online watchdog has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s using their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to prevent new accounts. In its first compliance report since the ban took effect, the regulator identified multiple shortcomings and has now shifted from observation to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Revealed in First Formal Review
Australia’s eSafety Commissioner has detailed a worrying pattern of non-compliance among the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to implement appropriate safeguards to prevent minors from using their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, noting that some platforms have allowed children who initially declared themselves under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings represent a notable intensification in the regulatory response, with the eSafety Commissioner transitioning from monitoring towards active enforcement. The regulator has stressed that the continued presence of some underage accounts will not by itself decide the question; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s commitment to holding tech giants responsible, with potential penalties looming for companies that do not meet their statutory obligations. The report identified poor practices including:
- Permitting previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequence
- Weak systems to prevent new under-16 accounts from being opened
- Inadequate reporting tools for families and the wider community
- Absence of transparent data about enforcement efforts and user account terminations
The Scope of the Challenge
The sheer scale of social media activity amongst young Australians underscores the compliance challenge facing both the authorities and the platforms themselves. With numerous accounts already restricted or removed since the ban’s implementation, the figures provide evidence of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are adequate to the task.
Beyond the operational challenges lies a broader concern about the readiness of companies to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to implementing the systems the law mandates. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they stand to incur significant fines that could reshape their business models in Australia and potentially influence compliance frameworks internationally.
What the Figures Indicate
In the opening month following the ban’s introduction, Australian officials stated that 4.7 million accounts had been suspended or removed. Whilst this number initially seemed to prove regulatory success, subsequent analysis reveals a more nuanced picture. The considerable quantity of account removals indicates that many under-16s had been able to set up accounts in the initial stages, demonstrating that preventive controls were inadequate. Moreover, the data casts doubt on whether deleted profiles reflect authentic compliance or simply users removing their profiles voluntarily in light of the new restrictions.
The limited transparency surrounding these figures has troubled independent observers attempting to evaluate the ban’s true effectiveness. Platforms have disclosed minimal information about their implementation approaches, performance indicators, or the profile of removed accounts. This lack of clarity makes it challenging for regulators and the public to determine whether the ban is operating as planned or whether teenagers are simply finding alternative ways to access social media. The Commissioner’s push for comprehensive proof of structured compliance processes reflects mounting dissatisfaction with platforms’ resistance to full disclosure.
Sector Reaction and Opposition
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts regarding the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst simultaneously arguing that accurate age determination remains a major challenge across the industry. The company has advocated for a different approach, proposing that strong age verification systems and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects wider concerns across the industry that the current regulatory framework places an impractical burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had suspended 450,000 accounts following the ban’s implementation and asserting that it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and expansion—and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted stringent age verification, pointing to privacy concerns and technical limitations, creating a standoff between regulators and platforms over who carries responsibility for implementation.
- Meta argues age verification should take place at app store level rather than on individual platforms
- Snap says it has suspended 450,000 accounts since the ban’s implementation in December
- Industry groups highlight privacy issues and technical obstacles as barriers to effective age verification
- Platforms contend they are acting in good faith whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 online platform ban enters its enforcement phase, key questions persist about whether the law will accomplish its intended goals or merely push young users towards unregulated platforms. The regulator’s first compliance report reveals that significant loopholes remain: children keep discovering ways to circumvent age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will truly leave major social networks or simply shift towards alternative services, encrypted messaging applications, or virtual private networks used to conceal their age and location.
The ban’s international ramifications add further complexity to any assessment of its effectiveness. Countries such as the United Kingdom and Canada, along with multiple European nations, are observing Australia’s approach closely as they explore similar legislation for their own citizens. If the ban fails to reduce children’s digital engagement or to protect them from dangerous online content, it could undermine the case for similar measures elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage access, it may embolden other nations to adopt comparable measures. The outcome could shape international regulatory direction for years to come, ensuring Australia’s implementation efforts will be scrutinised far beyond its borders.
Who Benefits and Who Is Disadvantaged
Mental health advocates and organisations focused on child safety have backed the ban as a necessary intervention to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms built to maximise engagement could reduce anxiety, improve sleep patterns, and limit exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people—maintaining friendships, obtaining educational material, and engaging with online communities around shared interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families challenge.
The ban’s concrete implications extend beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to develop age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended outcomes suggest the ban’s effects extend far beyond the simple goal of child protection.
What Lies Ahead for Enforcement
Australia’s eSafety Commissioner has signalled a notable transition from hands-off observation to active enforcement, marking a critical turning point in the implementation of the under-16 ban. The authority will now gather evidence to establish whether services have failed to take “reasonable steps” to restrict child participation, a legal standard that goes beyond simply recording that young people remain on these services. This approach demands concrete evidence that companies have implemented suitable systems and procedures designed to keep minors off their platforms. The Commissioner’s office has signalled it will pursue investigations systematically, building cases that could trigger substantial penalties for non-compliance. This move from oversight to enforcement reveals growing frustration with the companies’ present approach and signals that voluntary cooperation on its own will not be enough.
The enforcement phase raises critical questions about the adequacy of penalties and the practical mechanisms for ensuring platform accountability. Australia’s legislation provides enforcement tools, but their success relies on the eSafety Commissioner’s willingness to pursue formal proceedings and the platforms’ capacity to adapt substantively. Overseas authorities, notably regulators in the United Kingdom and European Union, will closely track Australia’s enforcement strategy and outcomes. A robust enforcement effort could provide a model for other jurisdictions contemplating comparable restrictions, whilst failure might weaken the case for the regulatory approach as a whole. The coming months will be critical in determining whether Australia’s groundbreaking legislation delivers substantive protection for children or remains largely symbolic in its impact.
