Addicted To Social Media ? by Joey Zanotti https://flic.kr/p/2aVc5Wi CC BY 2.0

The Illusion of Protection: Why Canada’s Growing Push to Ban Social Media for Kids Won’t Work

The momentum behind a social media ban for Canadian minors has been building for months. The federal Liberals voted at their April policy convention to back a minimum age of 16 for social media accounts and AI chatbots, the government’s expert panel on online safety is studying the issue, protesters have now rallied on Parliament Hill calling for it, and on Saturday night, Manitoba Premier Wab Kinew told a Winnipeg fundraiser that his government will be the first in Canada to ban social media and AI chatbots for kids. Kinew did not specify which kids, when it would happen, or how it would be enforced, none of which appeared to matter to the audience. The political appeal of a ban is obvious, since concerns about social media’s effects on young users are widely shared. Yet the policy itself is a terrible idea that will not work. This post examines six reasons why an outright age-based ban, particularly one that extends to AI chatbots, is the wrong response to a serious issue.

1. The ban lets social media off the hook

The concerns associated with social media are real, but they are not unique to young users. Algorithmic manipulation, addictive engagement design, inadequate content moderation, inconsistent policy enforcement, insufficient transparency, and privacy risks affect users of every age, and treating them as a children’s problem misidentifies both the source of the harm and the appropriate target of regulation. By focusing legislative attention on who is permitted to use social media rather than on how the platforms operate, an age-based ban functions as a pressure-relief valve for legislators and a gift to the companies, since it allows them to maintain existing practices while shifting the regulatory conversation to age-gating mechanisms that the platforms themselves will administer. The harder but more effective response, as I argued in the Globe and Mail in the wake of the recent Meta and YouTube verdict, is to regulate the platforms through algorithmic transparency requirements and an enforceable “duty to act responsibly” standard. A ban on kids using social media leaves the platforms largely untouched and merely changes who is allowed to use them.

2. The ban does not work

Australia is the test case for a ban and the experience to date suggests it does not work. Its ban on social media for under-16s came into effect in December 2025, and three months in, the eSafety Commissioner’s first compliance report provides the empirical record. Among children who had social media accounts on major platforms before the ban, roughly 70 per cent retained access to at least one three months later, with platform-specific retention rates of 63.6 per cent on Facebook, 69.1 per cent on Instagram, 69.4 per cent on Snapchat, and 69.3 per cent on TikTok. The eSafety Commissioner found no discernible reduction in cyberbullying or image-based abuse complaints from under-16 users since the ban took effect. The law itself has faced legal challenges and global criticism. For example, Reddit and several other platforms are mounting High Court challenges to the ban on free expression grounds. Amnesty International has described Australia’s approach as an “ineffective quick fix” that infringes on children’s rights to expression, information, and participation, while UNICEF Australia objected on similar grounds. Indeed, Professor Lisa Given laid out much of this evidence on a recent Law Bytes podcast episode before most of the data was even in. Canadian politicians now citing the Australian model with approval are pointing to a policy that its own regulator’s data suggests does not work.

3. The ban creates its own harms

Even where a ban changes behaviour at the margins, it does so by imposing real costs on everyone. The technologies that politicians are tacitly relying on to enforce it, such as mandatory age verification or age estimation, represent an enormous privacy risk. Where age verification is mandated, every user, child or adult, must submit ID. In other words, the system would require tens of millions of Canadians to submit their ID to third party providers in order to use social media (or AI, if the ban extends to chatbots). These providers are typically located outside Canada, which makes it difficult to apply Canadian privacy law to the data collection. A security breach at any such provider exposes government-issued identification documents to the entire internet, as the Discord breach in October 2025 demonstrated when the IDs of roughly 70,000 users were leaked online. Hundreds of scientists and technology experts signed an open letter earlier this year calling for a moratorium on mandatory age assurance and warning of catastrophic breach potential, as I have argued in detail and Ian Goldberg explained on my Law Bytes podcast.

Where age estimation is used instead, the risks are even greater. Because the technology cannot reliably distinguish 16-year-olds from 17-year-olds and has documented accuracy problems for darker skin tones, it requires additional surveillance of users to better estimate their age. That surveillance tracks a user’s friends, posts, messages, and any other indicia that might yield a more accurate guess. In the name of greater protection, the technology puts people at greater risk.

The harms also fall on the kids the ban is meant to help, since experience suggests that teens do not go offline when banned from major platforms but instead migrate to smaller, less-moderated services with thinner safety teams and where regulators have less leverage. Moreover, free VPN tools used to circumvent these laws come with their own data-collection and malware risks that children are even less equipped to evaluate than the platforms they are leaving. Social media is also a documented lifeline for marginalized youth, including LGBTQ+ youth in non-affirming environments, who rely on it for identity development and peer support, with the result that pushing kids off mainstream services falls hardest on the kids with the fewest alternatives.

The Manitoba announcement and the federal Liberal resolution would extend the ban to AI chatbots, which makes the verification problem even worse. AI tools are increasingly central to how people learn, work, and find information. They are integrated into search, productivity software, educational platforms, and a growing list of services. The technical infrastructure required to verify user ages on every AI-enabled service in a privacy-protective manner does not yet exist, and neither does the privacy law framework needed to govern the verification data such infrastructure would generate.

4. The polling on bans does not say what proponents claim

Saskatchewan Premier Scott Moe, who is consulting on the issue, recently joined a chorus of politicians citing a March 2026 Angus Reid Institute survey to explain his interest in a ban. The headline figure is real: three-quarters of respondents told Angus Reid they support a “full ban on social media use for anyone under the age of 16.” The other numbers in the same survey, less frequently quoted, complicate the picture. Seventy-two per cent of respondents said parents, not governments, should be primarily responsible for regulating teens’ social media use, which is difficult to reconcile with support for a government-imposed ban. Moreover, only 32 per cent picked 16 as the right age threshold (with near-equal shares picking 10-12, 14, and 15), and the survey did not ask respondents anything about the actual mechanism (mandatory age verification, ID submission to third-party providers, biometric scanning) that any ban would have to use in practice. As Professor Sara Grimes of McGill University, who directs the Kids Play Tech Lab, discusses in a recent explainer, this is not the unambiguous mandate political leaders are now treating it as: public support for “protect kids from harm” is not the same as public support for “every Canadian must submit ID to a third-party provider in order to use the internet,” and politicians are conflating the two.

5. Provincial bans make everything worse

Setting aside whether bans work, creating a patchwork of provincial social media laws with varying age thresholds, verification regimes, enforcement bodies, and definitions of what counts as a regulated service would be disastrous. Platforms would be forced either to maintain separate compliance systems for each province or to block Canadian users from their services entirely. The first option imposes compliance costs that smaller services, in particular, cannot bear, while the second risks Canadians losing access to services that other countries treat as ordinary infrastructure (we’ve seen this movie before with the Online News Act and lost access to news links).

6. Kids have constitutional rights too

The discussion of social media bans for minors typically treats children as objects of protection rather than as rights-holders, but they are both, and Canadian and international law have been moving steadily toward recognizing the latter. Section 2(b) of the Charter of Rights and Freedoms protects freedom of expression, with the result that a law blocking an entire age cohort from lawful platforms necessarily restricts a Charter-protected interest of the very people the legislation claims to protect. The UN Committee on the Rights of the Child affirmed in General Comment 25 (2021) that children’s rights apply in the digital environment and include rights to information, expression, association, and participation, and the academic and policy literature on age-appropriate design has built on that foundation in arguing that children’s rights are a constraint on regulators as much as on platforms. The Charter problem is sharpened by what a ban actually does in practice, since it blocks all lawful content on a regulated platform for everyone in the affected age group, regardless of individual circumstance, parental consent, or the nature of the use. As noted above, Australia’s High Court is currently hearing a constitutional challenge to its under-16 ban on free expression grounds, and Canadian courts will face the same questions if Manitoba or any other government legislates without first showing that the policy can survive the constitutional scrutiny it will inevitably attract.

Strip away the political theatre and what remains is this: the ban will not keep most kids off the platforms (Australia is showing as much in real time), it will not measurably reduce the harms (the Australian regulator has yet to find evidence that it has), and it will impose privacy and free expression costs on every Canadian who wants to use ordinary social media services while leaving the underlying platform problems untouched, because the legislation does not actually require platforms to change anything about their products beyond who is allowed to log in. That isn’t real protection. It’s an illusion that should be rejected by politicians and advocates alike, in favour of policies more likely to address their concerns about protecting children from online harms.
