
Why the Online Harms Act is the Wrong Way to Regulate AI Chatbots
In the wake of reports that AI Minister Evan Solomon may press AI companies such as OpenAI to more aggressively report potential safety risks identified in private chats to law enforcement, attention has quickly turned to the Online Harms Act as a potential regulatory solution. The Online Harms Act, or Bill C-63, died on the order paper last year, but is expected to return in some form in the coming months. Given that the Act is tailor-made to address online harms, it isn’t surprising that some would suggest it could be expanded to cover AI chatbots.
Yet the law was deliberately designed to avoid doing what politicians now want the AI companies to do, as it expressly exempted private communications and proactive monitoring from its scope. Indeed, applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards that were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement. In effect, proposals to use online harms legislation to regulate AI chatbots risk reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.
More Transparency, Not Police Reporting: Navigating the Safety-Privacy Balance for AI Chatbots
My Globe and Mail op-ed begins by noting that AI Minister Evan Solomon summoned executives from OpenAI to Ottawa last week to explain why the company declined to alert police that it had flagged the account of Jesse Van Rootselaar, the Tumbler Ridge shooter who killed eight people earlier this month. The company stopped short of warning authorities, concluding that the account activity did not meet its standard of an “imminent and credible risk of serious physical harm to others.” After the meeting, Mr. Solomon expressed disappointment with OpenAI, saying the company had not presented “substantial new safety protocols.” Justice Minister Sean Fraser said the government expects OpenAI to make changes, or else it would step in to regulate artificial intelligence companies.
The desire to hold someone responsible for the potential prevention of the Tumbler Ridge tragedy is understandable. Add in the mounting pressure for AI regulation, and OpenAI makes for a perfect target for blame and threats of government action. Yet making AI companies responsible for reporting to police what users privately share in their chatbot conversations creates its own risks, undermining privacy and effectively encouraging heightened corporate surveillance.
Government Reveals Digital Policy Priorities in Trio of Responses to Canadian Heritage Committee Reports
The Canadian government has responded to three reports focused on digital policies from the Standing Committee on Canadian Heritage, shedding new light on potential future policies and priorities. The three reports – on tech giants, local media, and harms caused by illegal sexually explicit materials posted online – recommended a wide range of measures that include new laws, regulations, and government programs. The government sidesteps some of the recommended legislative reforms in its responses signed by Heritage Minister Marc Miller, suggesting limited interest in committing to broad-based platform liability rules.
The Law Bytes Podcast, Episode 255: Grappling with Grok – Heidi Tworek on the Limits of Canadian Law
The Law Bytes podcast is back, starting with an episode on the limits of Canadian law in addressing the concerns associated with Grok AI, the AI chatbot that garnered global attention over the widespread creation and distribution of AI-generated sexualized deepfakes. The issue weaves together online harms, privacy, AI regulation, and platform regulation; some countries have responded with service bans, but Canada has thus far struggled to respond.
To help understand what has taken place and Canada’s law and policy options, Professor Heidi Tworek returns to the Law Bytes podcast. Professor Tworek is a Canada Research Chair and Professor of History and Public Policy at the University of British Columbia, where she also directs the Centre for the Study of Democratic Institutions. Her work explores how new communications technologies affect democracy in the past and present, and she served on the government’s online harms advisory board.
Canada’s DST Debacle a Case Study of Digital Strategy Trouble
My Globe and Mail op-ed opens by noting that after years of dismissing warnings of likely retaliation, the Canadian government caved to U.S. pressure earlier this week as it cancelled the digital services tax. Faced with the U.S. suspension of trade negotiations, Finance Minister François-Philippe Champagne announced that the government would rescind the legislation that created the tax.