Australia’s internet regulator has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to stop new account creation. In its first compliance report since the ban took effect, the regulator identified multiple shortcomings and has now shifted from observation to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Regulatory Breaches Revealed in First Compliance Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance amongst the world’s biggest social media platforms in her first formal review since the ban took effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, noting that some platforms have allowed children who originally declared themselves to be under 16 to later assert they were older, thereby undermining the law’s intent.
The findings indicate a significant escalation in regulatory action, with the eSafety Commissioner transitioning from monitoring towards active enforcement. The regulator has stressed that merely demonstrating some children still hold accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s determination to hold tech giants responsible, with potential penalties looming for companies that do not meet the legal requirements. The report identified several inadequate practices:
- Permitting formerly prohibited users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without penalty
- Insufficient mechanisms to prevent new under-16 accounts from being established
- Insufficient reporting tools for parents and the general public
- Lack of transparent data about enforcement measures and account deletions
The Scope of the Issue
The considerable scale of social media usage amongst young Australians underscores the regulatory challenge facing both the government and the platforms themselves. With numerous accounts already restricted or removed since the ban’s implementation, the figures provide evidence of extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are adequate to the task.
Beyond the operational challenges lies a broader concern about the willingness of platforms to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms might not be demonstrating adequate commitment to implementing the legally mandated systems. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance infrastructure, or they stand to incur substantial fines that could transform their operations in Australia and possibly affect regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian authorities reported that 4.7 million accounts had been restricted or deleted. Whilst this figure initially appeared to demonstrate regulatory success, further investigation reveals a more layered picture. The considerable number of account deletions suggests that many under-16s had managed to establish accounts in the first place, revealing that protective safeguards were inadequate. Additionally, the data casts doubt on whether suspended accounts reflect real enforcement or merely users closing their accounts willingly in response to the new restrictions.
The limited transparency concerning these figures has disappointed independent observers attempting to evaluate the ban’s actual effectiveness. Platforms have revealed scant details about their enforcement methodologies, effectiveness metrics, or the profile of deleted accounts. This absence of transparency makes it difficult for regulators and the public to evaluate whether the ban is operating as planned or whether teenagers are simply finding different means to use social media. The Commissioner’s insistence on detailed evidence of systematic compliance measures reflects increasing concern about platforms’ reluctance to provide full information.
Sector Reaction and Pushback
The major tech platforms have responded to the regulatory enforcement measures with a combination of assurances of compliance and scepticism about the ban’s practicality. Meta, which operates Facebook and Instagram, stressed its dedication to adhering to Australian law whilst simultaneously arguing that accurate age determination remains a major challenge across the industry. The company has advocated for a different approach, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects broader industry concerns that the current regulatory framework places an impractical burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, stating that it had suspended 450,000 accounts following the ban’s implementation and that it continues to lock more daily. However, sector analysts question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models—which historically relied on maximising user engagement and growth—and the statutory obligation to systematically remove a whole age group remains unresolved. Companies have consistently opposed stringent age verification, pointing to privacy concerns and technical limitations, creating an impasse between regulators and platforms over who bears responsibility for enforcement.
- Meta maintains age verification should occur at app store level rather than on individual platforms
- Snap claims to have locked 450,000 user accounts following the ban’s implementation in December
- Industry groups highlight privacy issues and technical obstacles as barriers to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s overall effectiveness
Wider Considerations Concerning the Prohibition’s Effectiveness
As Australia’s under-16 online platform ban enters its implementation stage, key concerns persist about whether the legislation will accomplish its stated objectives or merely drive young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that following implementation, substantial gaps remain—children continue finding ways to bypass age verification mechanisms, and platforms have struggled to prevent new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will truly leave mainstream platforms or simply shift towards alternative services, encrypted messaging apps, or virtual private networks that mask their location and help them evade age checks.
The ban’s global implications add further complexity to any assessment of its success. Countries including the United Kingdom, Canada, and multiple European countries are observing Australia’s approach closely, considering similar legislation for their respective populations. If the ban does not successfully reduce children’s social media usage or does not protect them from damaging material, it could weaken the case for similar measures elsewhere. Conversely, if enforcement becomes sufficiently robust to effectively limit underage access, it may encourage other governments to implement similar strategies. The outcome will probably shape international regulatory direction for many years ahead, meaning Australia’s enforcement efforts will be scrutinised far beyond its borders.
Who Gains and Who Is Disadvantaged
Mental health campaigners and child safety organisations have endorsed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms built to maximise engagement could reduce anxiety, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has recognised the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, accessing educational content, and participating in online communities around common interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families question.
The ban’s practical impact reaches past individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger demographic audiences. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban unintentionally advantages large technology companies with the resources to develop age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the simple goal of child protection.
What Happens Next for Regulatory Action
Australia’s eSafety Commissioner has signalled a notable transition from passive oversight to proactive action, marking a pivotal moment in the implementation of the under-16 ban. The regulator will now gather evidence to determine whether services have failed to take “reasonable steps” to restrict child participation, a statutory benchmark that extends beyond simply documenting that minors continue using these services. This approach requires demonstrable proof that organisations have introduced appropriate systems and protocols designed to keep out minors. The regulatory body has indicated it will pursue investigations carefully, building cases that could lead to significant fines for non-compliance. This transition from monitoring to enforcement demonstrates mounting concern about the platforms’ current efforts and suggests that voluntary cooperation on its own will not be enough.
The rollout phase raises important questions about the adequacy of penalties and the concrete procedures for holding tech giants accountable. Australia’s legislation provides compliance mechanisms, but their efficacy hinges on the eSafety Commissioner’s readiness to undertake regulatory enforcement and the platforms’ capability to adjust effectively. Global regulators, particularly those in the United Kingdom and European Union, will keenly observe Australia’s regulatory approach and its consequences. A robust enforcement effort could establish a model for other countries evaluating comparable restrictions, whilst shortcomings might undermine the entire regulatory framework. The forthcoming period will determine whether Australia’s pioneering regulatory approach produces genuine protection for young people or becomes largely performative in its effect.
