When the Australian Government introduced its world-first social media age ban late last year, the intent was clear and, frankly, hard to argue with. Kids were spending too much time on platforms designed to keep them there, algorithms were serving them content nobody had approved, and parents felt powerless. Something needed to be done.
The problem is, what was done hasn’t worked.
EFTM surveyed hundreds of Australian parents this past week, and the results are about as damning as they get. Nearly two-thirds say the ban has been completely ineffective. Not partially. Not disappointing. Completely ineffective. A further quarter say their kids have simply found workarounds. When you add those two groups together, you have nine in ten parents telling you this policy has failed to achieve its purpose. Not a single respondent said it was working as intended. Not one.
So what went wrong?
Let’s start with the mechanism at the heart of the ban: age verification. The entire policy rests on the idea that platforms can reliably identify whether a user is under 16 and block them accordingly. The Government’s own studies into Age Assurance technologies showed clearly that identifying the ages of teens was problematic at best, and near impossible among the 13-16 year old age group. They should have seen this coming.
And the survey data backs that up. Of children who faced any verification attempt at all, the overwhelming majority passed without any help from a parent. A substantial proportion encountered no verification attempt whatsoever. Fewer than one in twenty-five were actually blocked.
Parents in our survey described their kids bypassing checks using older siblings’ faces, grandparents’ accounts, and in more than a few cases, just trying again and again until the system caved. One parent told us their son drew a handlebar moustache on his face and sailed through. Another said her kids used their 21-year-old brother’s face on camera. This is not a fringe experience. This is the norm.
It is worth being clear here: the platforms are not entirely to blame. While it is up to them to implement verification technology, if the technology does not reliably exist, you cannot legislate your way to a result. The Government handed the platforms an impossible brief and then stepped back. That is a policy failure as much as a platform failure.
Platforms aside, what about at home? Here the picture is equally patchy. Only around one in ten parents in our survey actively enforced the ban in their household. The majority said they were torn and did nothing either way. More than a quarter said they actively helped their child maintain access.
The reasons are understandable. Rural families told us social media is how their kids maintain friendships when driving to a mate’s place is an hour each way. Parents of kids approaching 16 said the disruption for a few months felt pointless. Others said all of their child’s friends were still online and enforcing the ban at home just left their kid isolated.
One parent summed it up plainly: “It was impossible to enforce because my grade 7 kid and I can say with 100% certainty that almost all of his friends are still on these platforms.”
That is the reality the policy ran into. A ban that almost nobody enforces is not a ban.
Here is what really concerns me. For a meaningful number of families, the ban has not made things safer. It has made them less safe.
Before the ban, many kids had authenticated accounts. Platforms knew their age, served age-appropriate content, and parents had visibility through family account settings and parental controls. The ban pushed a significant number of those kids off their supervised accounts and onto anonymous guest browsing. No profile. No age filter. No algorithm curation. No parental oversight. One parent put it starkly: “I now have less control over what my daughter can see. Before, the platform knew her age and content was filtered accordingly. Now it thinks she is over 16, and I have no controls at all.”
YouTube came up repeatedly and specifically. Families who had YouTube Premium with family accounts set up properly found their kids pushed into ad-heavy guest sessions with no content controls. Some parents described their children now watching “very random stuff” because their personalised, age-appropriate feed no longer exists. That is not a win.
And then there is the displacement problem. Among kids who did come off social media, the most commonly reported change in our survey was not less screen time. It was more screen time on other apps and games. The next most common? Increased use of messaging apps like WhatsApp and iMessage. Several parents, including teachers, flagged that bullying and harassment that used to happen on social platforms has simply migrated to group chats and gaming platforms that sit entirely outside the ban’s reach and, critically, outside parents’ view.
None of this means the ban was a bad idea. The intent was sound and most parents in our survey acknowledged that. Several, particularly those with younger children or those working in schools, said the legislation had a quiet benefit: it gave parents an external authority to point to. “The government says no” is an easier conversation than “because I said so.” Some school communities reported less peer pressure on younger kids to join platforms in the first place.
But good intent and poor execution are not the same thing, and right now we have the latter.
Parents need to be given real tools to monitor and control their kids’ social media access. Those tools exist. Content rating systems, algorithmic restrictions tied to age, mandatory parental controls as a platform standard, not an optional extra. These are achievable. What is not achievable, as we have now seen, is a blunt ban that hands the hard work to technology that was never going to deliver.
The Government has a genuine opportunity here. This was a world-first piece of legislation. It showed ambition and intent. But the data from our survey is clear: as it stands, it is not working.
The saddest finding in everything we collected is not the bypass rates or the verification failures. It is the parents who told us they had relaxed because they assumed the government had taken care of it. That false sense of security is probably the ban’s most damaging legacy.
Fix the execution. The intent was right.
It is time to look at the way forward: keep algorithms out of kids' online experiences, remove harmful content from their view, and give parents genuine control over what their children see.
Survey conducted by EFTM.com in April 2026 among 426 Australian parents. 304 respondents had a child under 16 actively using social media before the ban.
Trev is a Technology Commentator, Dad, Speaker and Rev Head.
He produces and hosts several popular podcasts, EFTM, Two Blokes Talking Tech, Two Blokes Talking Electric Cars, The Best Movies You’ve Never Seen, and the Private Feed. He is the resident tech expert for Triple M on radio across Australia, and is the resident Tech Expert on Channel 9’s Today Show and appears regularly on 9 News, A Current Affair and Sky News Early Edition.
Father of three, he is often found in his Man Cave.