Australia is set to enforce one of the world’s strictest social media age-limit regulations as it begins implementing a minimum age of 16 for users on major digital platforms starting December 10, 2025. Social media giants including Facebook, Instagram, TikTok, Snapchat, Reddit, YouTube, Threads, Kick, X, and the livestreaming platform Twitch must remove accounts belonging to Australian children under 16 or face penalties of up to AUD 50 million (USD 33 million).
The Australian eSafety Commissioner will issue notices to these platforms on December 11, requiring them to disclose the number of under-16 accounts they have closed. Platforms will also be required to submit monthly compliance reports for six months, enabling the government to monitor the system’s effectiveness and identify loopholes in age-verification processes.
Communications Minister Anika Wells, speaking at the National Press Club of Australia, stated that while the government acknowledges age-assurance technology may take several days or weeks to become fully effective, any systemic breaches will result in heavy financial penalties. The new regulation aims to significantly strengthen online child protection at a time when global scrutiny of social media safety practices is intensifying.
Platforms Begin Age Verification: Google & Meta Respond
Google announced that any account holder on YouTube detected to be under 16 will be automatically signed out from December 10. These users will lose access to features that require a logged-in profile, such as playlists and the ability to subscribe to channels. Google will use personal data from associated Google accounts and other internal age signals to determine a user’s age.
Despite compliance, Google criticised the legislation, calling it “rushed” and stating that it fails to understand how young Australians use digital platforms.
Meta, which owns Facebook, Instagram, and Threads, has also outlined its age enforcement plan. From Thursday, Meta will begin removing accounts suspected to belong to under-16 users. To rectify false positives, individuals aged 16 and above can verify their age through Yoti Age Verification, using government-issued identity documents for digital authentication.
Australia’s Enforcement Plan and Penalties
Under the new law:
- Platforms must begin removing under-16 accounts from December 10.
- The eSafety Commissioner will issue initial information notices on December 11.
- Monthly compliance reports will be required for six months.
- A court may impose fines of up to AUD 50 million for repeated or systemic violations.
Wells reiterated that the law responds to longstanding concerns from Australian parents, emphasising the government’s commitment to prioritising child safety online.
Global Momentum for Minimum Social Media Age Laws
Australia is not alone in pushing for strict age limits.
- Malaysia recently announced a ban on accounts for children under 16 beginning in 2026.
- The European Commission, as well as France, Denmark, Greece, Romania, and New Zealand, are exploring or drafting similar policies.
This signals a growing international shift toward age-verification-based digital safety frameworks.
Legal Challenge Underway
The Digital Freedom Project, a digital rights group based in Sydney, is seeking a High Court injunction to block enforcement of the law. As of Wednesday, the hearing date has not been finalised. The group argues that the legislation could negatively impact privacy, free speech, and digital autonomy for young Australians.
Quick Reference Summary
| Key Area | Details |
| --- | --- |
| Minimum Age for Social Media in Australia | 16 years |
| Law Enforcement Begins | December 10, 2025 |
| Platforms Covered | Facebook, Instagram, TikTok, YouTube, X, Reddit, Snapchat, Threads, Kick, Twitch |
| Reporting Requirement | Monthly under-16 account removal data for 6 months |
| Maximum Fine | AUD 50 million |
| Key Measures by Platforms | Google & Meta begin age verification and automatic sign-outs |
| Legal Challenge | Filed by Digital Freedom Project |
Australia’s decision to enforce a strict 16+ age limit on social media marks a significant shift in global digital safety regulation. With mandatory reporting, age-verification expectations, and large financial penalties, platforms must now demonstrate proactive compliance. As Google and Meta begin implementing verification systems, and with other nations exploring similar laws, the international landscape of online child protection is evolving rapidly. For further updates, consult official Australian government releases.