The momentum behind a social media ban for Canadian minors has been building for months. The federal Liberals voted at their April policy convention to back a minimum age of 16 for social media accounts and AI chatbots, the government’s expert panel on online safety is studying the issue, and protesters have rallied on Parliament Hill in support of a ban. On Saturday night, Manitoba Premier Wab Kinew told a Winnipeg fundraiser that his government will be the first in Canada to ban social media and AI chatbots for kids. Kinew did not specify which kids, when the ban would take effect, or how it would be enforced, though none of that appeared to matter to the audience. The political appeal of a ban is obvious, since concerns about social media’s effects on young users are widely shared. Yet the policy itself is a terrible idea that will not work. This post examines six reasons why an outright age-based ban, particularly one that extends to AI chatbots, is the wrong response to a serious problem.