We’re hurtling toward the ban on social media for kids under 16, and there are still a lot – a real lot – of unanswered questions.
The most important one – will YouTube be included in, or excluded from, the ban?
In short, the draft plan has YouTube exempt because of its educational content. My own analysis of YouTube Shorts (which is what we should really be talking about) versus TikTok painted YouTube in a very poor light. And there are plenty who agree.
So, short of an actual decision by the Prime Minister, I asked our favourite Q&A master – AI.
And for this, given the nature of the topic, I opted for Google’s own “Gemini AI”.
I won’t sugar-coat this or even try to summarise it – I’ll show you the entire transcript. Scroll to the end to see what happens when I pressure it to make a decision. Hint: not good for YouTube.
The Australian Federal Government is moving forward with landmark legislation to ban children under the age of 16 from accessing social media platforms, with the new rules expected to be in full effect by December 2025. The Online Safety Amendment (Social Media Minimum Age) Act 2024 has been passed, marking a significant step in the government’s efforts to protect young people from the potential harms of social media.
Under the new law, social media companies will be required to take “reasonable steps” to prevent individuals under 16 from creating or maintaining accounts on their platforms. The onus will be on the companies to implement effective age verification processes.
Key Provisions of the Ban:
- Age Limit: The legislation sets a minimum age of 16 for using social media services in Australia.
- “Reasonable Steps” for Platforms: Social media companies must implement measures to verify the age of their users. The specific requirements for what constitutes “reasonable steps” will be outlined in guidelines to be developed by the eSafety Commissioner.
- Significant Penalties: Companies that fail to comply with the new regulations face substantial fines.
- No Penalties for Users: The legislation is targeted at social media platforms, and there will be no penalties for children who manage to bypass the age gate or for their parents.
- Exemptions: The ban will not apply to all online services. Messaging apps, online gaming platforms, and services with a primary focus on education or health are expected to be exempt.
Why Is the Government Implementing This Ban?
The government has cited growing concerns about the negative impacts of social media on the mental health and well-being of young people as the primary driver for the new law. These concerns include exposure to harmful content, cyberbullying, and the addictive nature of social media platforms. Proponents of the ban argue that it will help to create a safer online environment for children and encourage them to spend more time on other activities.
How Will Age Verification Work?
A crucial element of the plan is the implementation of effective age assurance technology. The Australian government has commissioned a trial to assess various age verification methods, including facial recognition and other technological solutions. Preliminary findings from this trial have suggested that implementing robust age verification is technically feasible.
However, the exact methods that social media companies will be required to use have not yet been finalized. The eSafety Commissioner is currently in the process of consulting with industry, experts, and the public to develop comprehensive guidelines.
Public and Expert Reaction:
The proposed ban has generated significant public and expert debate. Supporters, including many parents and child safety advocates, have welcomed the move as a necessary step to protect children.
However, there are also concerns about the practicalities of enforcement, the potential for children to find ways around the restrictions (such as using VPNs), and the potential impact on young people’s ability to connect and access information online. Some civil liberties groups have also raised privacy concerns related to the collection of data for age verification purposes.
What Happens Next?
The social media industry has been given a transition period to prepare for the new requirements. The eSafety Commissioner will continue its consultation process to finalize the guidelines for “reasonable steps” for age verification. The full ban is expected to be implemented by December 2025.
This move by the Australian government is being closely watched internationally as other countries grapple with similar concerns about the impact of social media on young people. The effectiveness and real-world impact of this pioneering legislation will become clearer once it is fully in force.
When it comes to the Australian Federal Government’s plan to ban social media for children under 16, the definition of “social media” is not a broad, all-encompassing term. Instead, it is specifically defined within the legal framework of the Online Safety Act 2021 and its recent amendment, the Online Safety Amendment (Social Media Minimum Age) Act 2024.
The Legal Definition of a Social Media Service
Under the Online Safety Act, a “social media service” is defined as an electronic service that meets the following conditions:
- The sole or primary purpose of the service is to enable online social interaction between two or more end-users.
- The service allows end-users to link to, or interact with, some or all of the other end-users.
- The service allows end-users to post material on the service.
This definition is designed to capture platforms where the main function is to facilitate social connections and user-generated content.
“Age-Restricted Social Media Platform”: The Focus of the Ban
The ban specifically targets what the legislation calls an “age-restricted social media platform.” This is a subset of social media services and is defined as a platform where:
- The sole or a significant purpose of the service is to enable online social interaction between two or more end-users.
- The service allows end-users to link to, or interact with, some or all of the other end-users.
- The service allows end-users to post material on the service.
The key phrase here is “significant purpose,” which broadens the scope to include services where social interaction is a major feature, even if not the only one.
What Platforms Are Likely to Be Included?
While a definitive and exhaustive list has not been published, based on the government’s announcements and the legislative definitions, the ban is expected to apply to major platforms commonly understood as social media, such as:
- TikTok
- Snapchat
- X (formerly Twitter)
The eSafety Commissioner, who is responsible for enforcing these regulations, will provide further clarity and guidelines for what constitutes an “age-restricted social media platform.”
What Services Are Likely to Be Exempt?
The government has indicated that not all online services with social features will be subject to the ban. The following categories of services are expected to be exempt:
- Messaging apps: Services primarily used for private communication, such as WhatsApp and Telegram.
- Online gaming services: Platforms where the primary purpose is gaming, although many have chat and social features.
- Educational and health services: Platforms with a primary focus on providing educational or health-related content and support.
- YouTube: This platform is notable as the government has indicated it will likely be exempt, primarily because users can access a vast amount of content without needing to create an account.
The Minister for Communications has the power to make rules that specify which services are or are not considered “age-restricted social media platforms,” allowing for flexibility as technology evolves. The final list of included and exempt services will be clarified as the implementation date for the ban approaches.
Yes, that’s correct. The statement that YouTube contains “Widespread Harmful Content” is based on extensive research and data, particularly from Australia’s lead online safety regulator, the eSafety Commissioner.
It’s important to clarify that this does not mean most content on YouTube is harmful. The platform is a vast resource for education and entertainment. However, the term “widespread” is used because the volume of harmful material is significant enough that a large number of Australian children and teens are regularly exposed to it.
Here is a breakdown of what authorities and researchers mean by “harmful content,” with supporting evidence from Australian sources:
Types of Widespread Harmful Content
Research from the eSafety Commissioner and other Australian bodies shows that young people are frequently encountering several categories of harmful content:
1. Content with Violence and Graphic Themes: This is one of the most common types of harmful content encountered. It includes real-world violence, graphic news coverage, and simulated violence in user-generated videos that is not appropriately age-restricted.
- Evidence: According to eSafety research, 57% of Australian young people aged 12 to 17 have seen real, disturbing violence online.
2. Sexually Explicit or Inappropriate Material: This includes pornography and other sexualised content that is easily accessible. Young people may stumble upon it accidentally through search or be exposed to it via links and recommendations.
- Evidence: eSafety research found that nearly half of Australian children between the ages of 9 and 16 experience regular exposure to sexual images. Furthermore, a 2022 eSafety study found that 20% of young people aged 12-17 had been sent unwanted inappropriate content, such as pornography.
3. Hate Speech and Extremist Material: This category includes content that demeans or attacks individuals or groups based on their race, religion, gender, or sexuality. It also includes content that promotes extremist ideologies or terrorism.
- Evidence: Research from the eSafety Commissioner shows that 33% of Australian youth have seen images or videos online that promote terrorism.
4. Pro-Eating Disorder and Self-Harm Content: These are videos and communities that can encourage, normalise, or provide instruction for self-harm and disordered eating. While YouTube’s policies prohibit this, such content frequently evades moderation by using coded language and imagery.
5. Misinformation and Disinformation: This includes false or deliberately misleading content on topics ranging from health and science to news and politics. Young people can be particularly vulnerable to believing and sharing this information.
- Evidence: In a 2025 report, YouTube disclosed it had removed 5,169 videos uploaded from Australian IP addresses in 2024 for violating misinformation, spam, or scam policies. While this shows enforcement action, it also indicates the volume of such material being uploaded.
The Role of the Algorithm
A key reason this content is considered “widespread” is YouTube’s powerful recommendation algorithm. Its primary goal is to maximise engagement by keeping users watching. This can lead to what is known as a “rabbit hole” effect.
As Australia’s eSafety Commissioner has warned, a young person’s innocent search for a topic like fitness or gaming can, through a series of algorithmic recommendations, expose them to more extreme content, including unrealistic body standards, misogyny, or radical political views.
YouTube’s Response vs. The Scale of the Problem
To be balanced, it is crucial to note that YouTube has Community Guidelines that prohibit all of the content described above. The company invests heavily in both AI and human moderation to detect and remove billions of videos. Its official policy is to terminate channels that repeatedly violate these rules.
However, the sheer scale of YouTube—with hundreds of hours of video uploaded every minute—makes perfect moderation impossible. Harmful content consistently slips through the cracks or is uploaded faster than it can be removed.
Therefore, when authorities like the eSafety Commissioner refer to “widespread harmful content,” they are describing a reality where, despite the platform’s efforts, a significant and unacceptable amount of dangerous material reaches Australian children, making it a persistent and widespread problem.
Based on my knowledge and analysis of the current situation in Australia, the decision to exempt YouTube from the under-16 social media ban is a highly contentious issue, and whether it is “wise” depends heavily on which aspects of the platform one prioritises.
Here is a balanced view of the arguments concerning the content on YouTube and its suitability for children under 16.
The Argument for Exempting YouTube (Why it might be wise)
The government’s official rationale for exempting YouTube centres on its function as a utility for information and education, distinguishing it from platforms whose primary purpose is social interaction.
- Vast Educational and Creative Resource: YouTube is an unparalleled repository of “how-to” guides, scientific explanations, historical documentaries, artistic performances, and educational content that supports school curricula. For many young people, it is a primary tool for learning a new skill, completing homework, or exploring a topic of interest. Banning it would remove access to this significant and largely free resource.
- Passive Consumption vs. Active Social Interaction: The government and other proponents of the exemption argue that YouTube can be used primarily for passive video consumption, much like a streaming service (e.g., Netflix or ABC iView). This is different from platforms like Instagram or TikTok, which are fundamentally designed around active social networking, user status, and reciprocal engagement. The most significant harms are often linked to this active social pressure, rather than passive viewing.
- The Existence of YouTube Kids: Google has invested in a dedicated, filtered platform, YouTube Kids, which provides a more controlled environment. It allows parents to create profiles based on age groups (preschool, younger, older) and offers enhanced parental controls, including the ability to hand-pick channels and disable autoplay and search functions.
- Positive Community and Connection: Beyond formal education, YouTube provides valuable communities for young people, particularly those with niche interests or those who may feel isolated. It can be a source of connection and support, for instance, for LGBTQ+ youth or those with chronic illnesses.
The Argument Against Exempting YouTube (Why it might be unwise)
Critics, including rival social media companies and numerous child safety experts, argue that the exemption is illogical and undermines the core purpose of the legislation.
- Powerful Social Media Features: A user with a YouTube account experiences many of the same features that are central to the platforms being banned. This includes algorithmic content recommendations, comments sections (which can be sources of cyberbullying and toxic discourse), live-streaming with real-time chat, and community posts from creators that function like social media feeds.
- The Danger of the Algorithm: YouTube’s recommendation algorithm is notoriously powerful. While it can be a tool for discovery, it has also been heavily criticised for its potential to lead users, including children, down “rabbit holes” of increasingly extreme, inappropriate, or harmful content. A search on a neutral topic can, through a series of recommended videos, end up on content promoting misinformation, conspiracy theories, misogyny, or extremist ideologies.
- Widespread Harmful Content: Despite its community guidelines, a vast amount of content that is unsuitable for children under 16 exists on the main platform. This includes graphic violence, sexually suggestive material, promotion of eating disorders, and dangerous “challenges.” Research from Australia’s eSafety Commissioner shows that a significant majority of teens have been exposed to harmful content online, and YouTube is the most popular platform among this age group.
- Commercialisation and Parasocial Relationships: The platform’s influencer-driven model can blur the lines between entertainment and advertising, making young people vulnerable to sophisticated marketing. It also fosters intense parasocial relationships with creators, which can have negative psychological effects.
Conclusion: A Matter of Definition and Balancing Harms
Ultimately, the wisdom of exempting YouTube is not a straightforward matter. It hinges on a definitional debate: Is YouTube a video library that happens to have social features, or is it a social media platform that happens to host long-form videos?
The government has chosen the former definition, prioritising YouTube’s utility as an educational and informational tool. The decision implies a belief that the potential harms can be managed through other means—such as the existing powers of the eSafety Commissioner to order the removal of illegal and restricted content, the promotion of the YouTube Kids app, and parental guidance.
However, critics make a compelling case that by exempting the most popular video platform among young people—one that contains powerful algorithmic and social features linked to online harms—the ban may be significantly weakened in its overall effectiveness. The decision places a very high degree of trust in YouTube’s own content moderation systems and in the ability of parents and children to navigate a complex and, at times, hazardous content environment.
Taking the perspective of a national leader, the decision on whether to grant YouTube an exemption from the under-16 social media ban is one of the most complex balancing acts in modern regulation. The core of the issue is not a simple choice between “child safety” and “access to information,” but a multi-faceted risk assessment where both potential action and inaction carry significant consequences.
As Prime Minister, my decision would not be based on a single factor, but on weighing the severe, documented harms against the platform’s societal role and the very real risks of a ban backfiring. Here is the framework I would use to make that determination, acknowledging that children’s lives and well-being are the paramount concern.
The Case for Including YouTube in the Ban
This argument is straightforward and powerful, resting on the primary duty of the government to protect its most vulnerable citizens.
- Equivalent Harm, Equivalent Treatment: The core rationale for the ban is to shield children from harms endemic to social media platforms: addictive design, cyberbullying, exposure to age-inappropriate content, and the mental health impacts of algorithmic content funnels. Evidence from Australia’s eSafety Commissioner and numerous global studies shows that YouTube exhibits these same features and propagates the same harms. Its algorithm can lead young users to extremist content, pro-eating disorder material, and graphic violence. Therefore, exempting the most-used video platform by Australian youth could be seen as an illogical and glaring loophole that undermines the very purpose of the law.
- The “Social” Nature of Modern YouTube: While it began as a video library, YouTube is now a deeply social platform. Features like YouTube “Shorts” are, as competitors rightly argue, “virtually indistinguishable” from TikTok. Live-stream chats, community posts, and the comment sections are major hubs of social interaction and, consequently, risk. To ignore this evolution is to ignore the reality of how young people use the platform.
- Setting a Clear Standard: Including YouTube would send an unambiguous message to all digital platforms: if your service uses addictive, algorithm-driven social features and is used by millions of children, you are subject to the highest standards of safety and responsibility. An exemption risks creating an unlevel playing field and suggests that some forms of algorithm-driven harm are more acceptable than others.
The Case for Exempting YouTube (and the Risks of a Ban)
This side of the ledger focuses on the platform’s unique utility and the significant, unintended consequences that a blanket ban could create.
- Crippling a Core Educational and Cultural Tool: Unlike other platforms, YouTube functions as essential public-facing infrastructure for education, information, and culture. It’s used in classrooms (via Google Classroom), for homework help, for learning practical skills, and for accessing health information from trusted sources like Kids Helpline. A ban would instantly cut off a generation from a primary learning resource, disproportionately affecting those without access to paid alternatives.
- The Risk of Driving Behaviour Underground: A ban on a platform as ubiquitous as YouTube is unlikely to be 100% effective. As the eSafety Commissioner has warned, determined teens will use VPNs or other workarounds. This creates a more dangerous situation: their activity becomes hidden from parents and carers, and they are pushed into less-regulated, potentially more dangerous corners of the internet. If a child encounters harm on a banned service they are accessing secretly, they are far less likely to seek help from a trusted adult.
- Harming Marginalised Youth: For many young people, particularly those in regional areas, from culturally diverse backgrounds, or the LGBTQ+ community, online platforms like YouTube provide a crucial lifeline. They find community, see representation, and access support they cannot find in their immediate physical environment. Research shows these connections can be vital for mental well-being. A ban risks severing these positive ties, potentially increasing feelings of isolation.
- Practicality and Proportionality: A key difference from other platforms is that much of YouTube’s content can be consumed passively without an account. The government’s ban targets the creation of accounts by under-16s. Acknowledging this difference in use, and combining the existing powers of the Online Safety Act to remove harmful content with the promotion of tools like YouTube Kids and parental controls, could be seen as a more proportionate response than an outright ban.
The Decision-Making Calculus
As Prime Minister, my responsibility is to choose the path that leads to the least amount of net harm.
- Including YouTube in the ban would directly address the source of significant, documented harm to children. The moral clarity is high. However, it comes with the severe risk of collateral damage to education, digital literacy, and the well-being of marginalised youth, while potentially being ineffective and creating new, hidden dangers.
- Exempting YouTube acknowledges its vital societal role and avoids the severe unintended consequences of a ban. However, it relies heavily on the platform’s own moderation efforts (which are demonstrably imperfect) and the digital literacy of parents. It would attract valid criticism that the government is failing to address a known source of harm and giving a “sweetheart deal” to a tech giant.
Conclusion, from a Leadership Perspective
Given the evidence, the decision is incredibly difficult, with no perfect, risk-free option.
A leader focused purely on mitigating direct, known harms from social media features would lean towards including YouTube in the ban. The argument that it functions identically to other banned platforms in many respects is compelling.
However, a leader taking a broader view of societal well-being and the real-world behaviour of children would likely find the risks of a ban—driving activity underground, cutting off educational resources, and isolating vulnerable youth—to be potentially more catastrophic than the risks of a regulated, monitored exemption.
Therefore, if forced to make a choice, a responsible leader would likely pursue a heavily conditional exemption. This would involve holding YouTube to the absolute highest safety standards under the existing Online Safety Act, demanding radical transparency on its algorithm and content moderation, investing massively in digital literacy programs for parents and children, and explicitly reserving the right to revoke the exemption if these measures fail to demonstrably protect Australian children. It is a compromise, but one that attempts to navigate the immense complexities and avoid causing a new, potentially worse set of harms.
That’s the end of the transcript.
It’s like Gemini has been reading my mind. And good luck with that “heavily conditional exemption”.
Time, I think, for a complete rethink of this whole thing.
Trev is a Technology Commentator, Dad, Speaker and Rev Head.
He produces and hosts two popular podcasts, EFTM and Two Blokes Talking Tech. He also appears weekly on over 50 radio stations across Australia, is the resident Tech Expert on Channel 9’s Today Show each day, and appears regularly on A Current Affair.
Father of three, he is often found down in his Man Cave.