Archive for March, 2026


More Transparency, Not Police Reporting: Navigating the Safety-Privacy Balance for AI Chatbots

My Globe and Mail op-ed begins by noting that AI Minister Evan Solomon summoned executives from OpenAI to Ottawa last week to explain why the company declined to alert police that it had flagged the account of Jesse Van Rootselaar, the Tumbler Ridge shooter who killed eight people earlier this month. The company stopped short of warning authorities, concluding that the account activity did not meet its standard of an “imminent and credible risk of serious physical harm to others.” After the meeting, Mr. Solomon expressed disappointment with OpenAI, saying the company had not presented “substantial new safety protocols.” Justice Minister Sean Fraser said the government expects OpenAI to make changes, or else it would step in to regulate artificial intelligence companies.

The desire to hold someone responsible for the potential prevention of the Tumbler Ridge tragedy is understandable. Add in the mounting pressure for AI regulation, and OpenAI makes for a perfect target for blame and threats of government action. Yet holding AI chatbots liable for reporting to police what users privately post in their conversations creates its own risks, undermining privacy and effectively encouraging heightened corporate surveillance.


March 3, 2026 1 comment Columns

The Law Bytes Podcast, Episode 259: The Privacy and Surveillance Risks of AI Chatbot Reporting to Police

Over the past ten days, Canada has witnessed one of the fastest-moving technology policy debates in recent memory. What began as reporting about a tragic act of violence – the shootings in Tumbler Ridge, BC – quickly evolved into questions about AI safety, corporate responsibility, police reporting obligations, and now potential AI regulation.

This week’s Law Bytes podcast is a bit different from the norm. Building off my Globe and Mail op-ed, I walk through what has happened thus far, examine the potential policy responses, explain why both the Online Harms Act and current AI legislative models are poorly suited to this problem, and argue that Canada instead needs to start thinking seriously about an AI Transparency Act.


March 2, 2026 0 comments Podcasts