Latest Posts

fight antisemitism by Julia Tulke CC BY-NC-SA 2.0 https://flic.kr/p/njohj3

Words Are Not Enough: Countering Relentless Antisemitic Violence in Canada With Action

On a hot August day nearly 32 years ago, I was married at the Shaarei Shomayim synagogue in Toronto. My Globe and Mail op-ed notes that I leafed through my wedding album this weekend as I grappled with the news that gunfire targeted the synagogue on Friday night, the third such attack on a synagogue in Toronto in a matter of days. The photos of my grandparents – Holocaust survivors who rebuilt their lives in Canada – looked back at me as if to warn that the risks are real.

The gun violence sparked the usual political tweets denouncing the shooting, pledging support, and unconvincingly stating that antisemitism has no place in Canada. Yet the predominant emotion that would have once greeted this news – shock – is no more. Over the past two-and-a-half years, Canadian Jewish communities from coast to coast have faced relentless antisemitic incidents: schools hit with gunfire, synagogues firebombed, community centres and old-age homes vandalized, hospitals protested, summer camps threatened, Jewish students and campus groups vilified, and Jewish-owned businesses boycotted.

Read more ›

March 11, 2026 0 comments Columns
privacy by Alan Cleaver https://flic.kr/p/7fNVzm CC BY 2.0

The Law Bytes Podcast, Episode 260: What the Government Didn’t Want You To Hear About Bill C-4 And Its Weak Political Party Privacy Rules

Last spring, the government quietly inserted provisions that exempt political parties from the application of privacy protections in Bill C-4, an “affordability measures” bill. The government barely acknowledged the provision in its study of the bill at the House of Commons and refused to even hear witnesses on the issue. The Senate didn’t play along, however. It conducted hearings on the privacy rules and the Senators didn’t like what they heard, amending the bill to include a sunset clause on the privacy provisions that gives the government three years to come up with something better. The bill heads back to the House of Commons, where the government can either accept the change and have the bill pass or reject the change and send it back again to the Senate.

This Law Bytes podcast episode tells the story of what the Senate heard on Bill C-4. It is what the government did not want Canadians to hear and would prefer to ignore altogether. There were witnesses from advocacy groups, but the episode focuses on testimony from privacy commissioners (current and former) along with Elections Canada leadership.

Read more ›

March 9, 2026 2 comments Podcasts
2023 US-Canada Summit by Eurasia Group https://flic.kr/p/2osjLzX CC BY 2.0

Why the Online Harms Act is the Wrong Way to Regulate AI Chatbots

In the wake of reports that AI Minister Evan Solomon may press AI companies such as OpenAI to more aggressively report potential safety risks identified in private chats to law enforcement, attention has quickly turned to the Online Harms Act as a potential regulatory solution. The Online Harms Act, or Bill C-63, died on the order paper last year, but is expected to return in some form in the coming months. Given that the Act is tailor-made to address online harms, it isn’t surprising that some would suggest that it could be expanded to cover AI chatbots.

Yet the law was deliberately designed to avoid doing what politicians want the AI companies to do, as it expressly exempted private communications and proactive monitoring from its scope. Indeed, applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards that were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement. In effect, proposals to use online harms legislation to regulate AI chatbots risk reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.

Read more ›

March 4, 2026 4 comments News
OpenAI logo by ishmael daro https://flic.kr/p/2oZaMAk CC BY 2.0

More Transparency Not Police Reporting: Navigating the Safety-Privacy Balance for AI ChatBots

My Globe and Mail op-ed begins by noting that AI Minister Evan Solomon summoned executives from OpenAI to Ottawa last week to explain why the company declined to alert police that it had flagged the account of Jesse Van Rootselaar, the Tumbler Ridge shooter who killed eight people earlier this month. The company stopped short of warning authorities, concluding that the account activity did not meet its standard of an “imminent and credible risk of serious physical harm to others.” After the meeting, Mr. Solomon expressed disappointment with OpenAI, saying the company had not presented “substantial new safety protocols.” Justice Minister Sean Fraser said the government expects OpenAI to make changes, or else it would step in to regulate artificial intelligence companies.

The desire to hold someone responsible for the potential prevention of the Tumbler Ridge tragedy is understandable. Add in the mounting pressure for AI regulation, and OpenAI makes for a perfect target for blame and threats of government action. Yet making AI chatbot companies responsible for reporting to police what users privately post in their conversations creates its own risks, undermining privacy and effectively encouraging heightened corporate surveillance.

Read more ›

March 3, 2026 2 comments Columns
“What is ChatGPT doing… and why does it work?” by David Roessli CC BY-NC-SA 2.0 https://flic.kr/p/2oEJVLM

The Law Bytes Podcast, Episode 259: The Privacy and Surveillance Risks of AI Chatbot Reporting to Police

Over the past ten days, Canada has witnessed one of the fastest-moving technology policy debates in recent memory. What began as reporting about a tragic act of violence – the shootings in Tumbler Ridge, BC – quickly evolved into questions about AI safety, corporate responsibility, police reporting obligations, and now potential AI regulation.

This week’s Law Bytes podcast is a bit different from the norm. Building off my Globe and Mail op-ed, I walk through what has happened thus far, examine the potential policy responses, explain why both the Online Harms Act and current AI legislative models are poorly suited to this problem, and argue that Canada instead needs to start thinking seriously about an AI Transparency Act.

Read more ›

March 2, 2026 1 comment Podcasts