The government’s online harms bill, led by Canadian Heritage, is likely to be introduced in the coming weeks. My series on why the department faces a significant credibility gap on the issue opened with a look at its misleading and secretive approach to the 2021 online harms consultation, including its decision to disclose public submissions only when compelled to do so by law and its release of a misleading “What We Heard” report that omitted crucial information. Today’s post focuses on another Canadian Heritage consultation, conducted months later, on proposed anti-hate plans.

As the National Post reported earlier this year, after the consultation launched, officials became alarmed when responses criticizing the plan and questioning government priorities began to emerge. The solution? The department remarkably decided to filter critics out of the consultation by adding a new question that cut the process short for anyone who responded that anti-hate measures should not be a top government priority.
Archive for April 12th, 2023
