Australia sets precedent with under-16 social media ban, big tech scrambling
Australia will implement a ban on social media accounts for users under 16 starting 10 December, requiring technology companies to take "reasonable steps" to prevent underage registration.
The law, the first of its kind to offer no parental-consent exemption, reflects growing concern over the mental health and safety of young social media users, the BBC reports.
Communications Minister Anika Wells said tech companies had been given "15 to 20 years" to address harms voluntarily, but that "it's not enough". The measure aligns with global scrutiny of large platforms over how their products are designed and how they affect minors.
Health and safety concerns
The policy is rooted in evidence and allegations that social media can negatively affect teenagers' wellbeing. Multiple lawsuits claim that platforms including Meta, TikTok, Snapchat, and YouTube deliberately designed apps to be addictive, knowing they could harm young users.
Critics point to features such as Instagram's face-altering beauty filters, which experts link to body dysmorphia and eating disorders, as examples of potentially harmful design.
Former employees have testified about the risks associated with platform design, and some whistleblowers allege executives actively blocked proposals to reduce exposure to damaging features.
Studies suggest that young users can experience increased anxiety, poor self-esteem, and compulsive use patterns linked to platform algorithms that prioritize engagement over wellbeing.
Beyond mental health, authorities are concerned about exposure to sexual exploitation and harmful content. Allegations that platforms have failed to remove predatory accounts or misleading material have reinforced calls for regulatory oversight.
Industry response
Tech companies have strongly opposed the Australian rules, describing the law as "blanket censorship" and warning that restricting access could make children less safe.
Executives argue that parents, rather than governments, should decide when teens can access social media, and they have questioned whether age verification technology is reliable.
In practice, firms have launched initiatives aimed at mitigating risk, although critics remain skeptical. YouTube has deployed AI systems to estimate user age and limit access to harmful content.
Snapchat introduced accounts for users aged 13 to 17 with stricter privacy defaults. Meta rolled out Instagram Teen accounts, restricting explicit content and limiting interactions with unknown users.
A study led by former Meta employee Arturo Béjar found that nearly two-thirds of the safety tools on Instagram Teen accounts were ineffective. Observers argue that many of these measures are designed more to create the appearance of safety than to substantially reduce harm.
Tech companies have also sought to shift responsibility for compliance. Both Meta and Snap have suggested that app store operators, including Apple and Google, should handle age verification. Privately, industry leaders have lobbied the Australian government and met with officials to influence policy design.
Concealment allegations
Court filings and whistleblower testimony have added to scrutiny of platform practices. Lawsuits allege that executives actively concealed the risks of social media, including addictive features and content harmful to young users.
One allegation holds that Meta's founder, Mark Zuckerberg, vetoed proposals to remove filters that could worsen body image issues among teens. Former employees have testified that safety concerns were often overridden in favor of engagement metrics.
These allegations underscore the rationale behind Australia's strict approach, which aims to compel companies to take direct responsibility for protecting minors rather than relying on voluntary self-regulation.
Global implications
Australia's ban is being closely watched internationally. Wells said officials in the European Union, Fiji, Greece, Malta, Denmark, and Norway have requested guidance on similar measures, while Singapore and Brazil are monitoring the rollout.
Analysts suggest that the law could serve as a "proof of concept" for other countries considering stricter regulation, though tech firms have an incentive to implement it grudgingly, since a smooth rollout could encourage copycat policies elsewhere.
The law carries maximum penalties of A$49.5 million for serious breaches, a sum analysts suggest may be too small to change corporate behavior significantly but large enough to pressure companies into demonstrating at least partial compliance.
Former Facebook Australia chief Stephen Scheeler described the regulation as a "seatbelt moment" for the industry, arguing that imperfect regulation is preferable to none.
Looking ahead
Australia's policy highlights a growing tension between protecting young users and preserving industry autonomy. While companies continue to develop safety features and adjust privacy settings, critics argue that these measures fall short of addressing fundamental risks and that prior concealment of harmful effects undermines trust.
By legally restricting access for children under 16, Australia aims to set a precedent for more rigorous oversight of social media, signaling a willingness to prioritize youth safety over corporate convenience. How the law is enforced and whether it influences global regulation will be closely watched in the coming months.
