Parliament adjourned for the summer last week, meaning both the House of Commons and Senate are largely on hold until mid-September. The Law Bytes podcast focuses intensively on Canadian legislative and digital policy developments, and with another Parliamentary year in the books, this week’s episode takes a look back and takes stock of where things stand. It features discussion on the implementation of the Internet streaming and news bills (C-11 and C-18), as well as an analysis of the current state of privacy, AI, online harms, and digital tax as found in Bills C-27, C-63, C-69 and S-210.

DANGER INTERNETS AHEAD by Les Orchard (CC BY-NC 2.0) https://flic.kr/p/cSsSX
Road to Nowhere: Parliament Breaks For the Summer With Little Accomplished on Digital Policy
The House of Commons adjourned for the summer yesterday with most committees and House debate on hold until mid-September. The government talked up its accomplishments, but on the digital policy front there was little to promote. The government’s most controversial digital-related bills, including online harms (Bill C-63) and privacy and AI regulation (Bill C-27), barely moved during the session, a function of badly bloated legislation that creates at least as many problems as it solves. With an election a little more than a year away, the clock is ticking and many legislative proposals will be hard-pressed to become law.
Where do things stand on the key pieces of legislation?
Debating the Online Harms Act: Insights from Two Recent Panels on Bill C-63
The Online Harms Act has sparked widespread debate over the past six weeks. I’ve covered the bill in a trio of Law Bytes podcast episodes (Online Harms, Canada Human Rights Act, Criminal Code) and participated in several panels focused on the issue. Those panels are posted below. First, a panel titled the Online Harms Act: What’s Fact and What’s Fiction, sponsored by CIJA, that included Emily Laidlaw, Richard Marceau, and me. It paid particular attention to the intersection between the bill and online hate.
Better Laws, Not Bans: Why a TikTok Ban is a Bad Idea
New legislation making its way through the U.S. Congress has placed a TikTok ban back on the public agenda. The app is already prohibited on government devices in Canada, the government has quietly conducted a national security review, and there are new calls to ban it altogether from the Canadian market. While it might be tempting for some politicians to jump on the bandwagon, a ban would be a mistake. There are legitimate concerns with social media companies, but there simply hasn’t been convincing evidence that TikTok currently poses a national security threat or that it presents a greater risk than any other social media service. The furor really seems to be a case of economic nationalism – a desire to deny a popular Chinese service access to the U.S. market – rather than a genuine case that TikTok poses a unique privacy and security threat. Taken at face value, however, the case against TikTok comes down to a simple concern: its owner, ByteDance, is a Chinese company that could theoretically be required to disclose user information to the Chinese government or compelled to act on its behalf. The proposed U.S. law would therefore require that TikTok be sold within six months or face a ban.
While the concerns associated with TikTok given its Chinese connection and popularity with younger demographics are well known, the privacy and security case against it is very weak.
Government Gaslighting Again?: Unpacking the Uncomfortable Reality of the Online Harms Act
The Online Harms Act was only introduced two weeks ago, but already it appears that the government is ready to run back the same playbook of gaslighting and denials that plagued Bills C-11 and C-18. Those bills, which addressed Internet streaming and news, faced widespread criticism over potential regulation of user content and the prospect of blocked news links on major Internet platforms. Rather than engage in a policy process that took the criticism seriously, the government ignored digital creators (including disrespecting indigenous creators) and dismissed the risks of Bill C-18 as a bluff. The results of that strategy are well-known: Bill C-11 required a policy direction fix and is mired in a years-long regulatory process at the CRTC, and news links have been blocked for months on Meta as the list of Canadian media bankruptcies and closures mounts.
Bill C-63, the Online Harms Act, offered the chance for a fresh start given that the government seemed to accept the sharp criticism of its first proposal, engaging in a more open consultative process in response. As I noted when the bill was first tabled, the core of the legislation addressing the responsibility of Internet platforms was indeed much improved. Yet it was immediately obvious that there were red flags, particularly with respect to the Digital Safety Commission charged with enforcing the law and with the inclusion of Criminal Code and Human Rights Act provisions with overbroad penalties and the potential to weaponize speech complaints. The hope – based on the more collaborative approach used to develop the law – was that there would be a “genuine welcoming of constructive criticism rather than the discouraging, hostile processes of recent years.” Two weeks in, that hope is rapidly disappearing.