My Globe and Mail op-ed begins by noting that AI Minister Evan Solomon summoned executives from OpenAI to Ottawa last week to explain why the company declined to alert police that it had flagged the account of Jesse Van Rootselaar, the Tumbler Ridge shooter who killed eight people earlier this month. The company stopped short of warning authorities, concluding that the account activity did not meet its standard of an “imminent and credible risk of serious physical harm to others.” After the meeting, Mr. Solomon expressed disappointment with OpenAI, saying the company had not presented “substantial new safety protocols.” Justice Minister Sean Fraser said the government expects OpenAI to make changes, or else it would step in to regulate artificial intelligence companies.
The desire to hold someone responsible for the failure to prevent the Tumbler Ridge tragedy is understandable. Add in the mounting pressure for AI regulation, and OpenAI makes for a perfect target for blame and threats of government action. Yet making AI companies responsible for reporting to police what users privately post in their chatbot conversations creates its own risks, undermining privacy and effectively encouraging heightened corporate surveillance.
Most global AI regulation has to date adopted a risk-based analysis that seeks to mitigate potential harms. The European Union’s AI Act classifies AI systems on a spectrum of risk with steadily increasing regulations for those that pose the highest risks. General purpose AI systems such as ChatGPT face a range of regulatory requirements given their potential impact, but are not treated as high-risk AI systems.
Debates over regulating AI content typically focus on the potential harms that may arise from AI chatbot outputs. Last year, the U.S. Congress held hearings on regulating outputs involving suicide or self-harm after the suicide of a 16-year-old whose parents blamed ChatGPT, characterizing its role as that of a “suicide coach.” There have been similar fears about inaccurate health information that could lead users to follow dangerous medical advice or delay seeking medical attention. These issues have led AI companies to more proactively address the information generated by their services.
But using regulation to require more accurate information – or even to block certain topics from discussion altogether – is far different from mandating that companies monitor what their users say and establish lower thresholds for reporting suspicious behaviour. If the standards for reporting are too low, there is a real risk that users could face police investigations or worse.
Moreover, given that internet intermediaries such as tech companies find themselves at the centre of virtually everything people write – whether text messages, e-mails, articles stored on cloud-based services, or exchanges with chatbots – these standards of disclosure would presumably apply to virtually all written expression.
The companies clearly recognize the need to monitor user activity and escalate where appropriate. The OpenAI case is notable because the company did not adopt a hands-off approach but instead identified a potential risk and escalated it to discussion amongst employees on what to do next. In hindsight, reporting would have been more beneficial than banning the account, but the company should be judged based on its processes and whether it properly adhered to them.
While the latest incident has sparked renewed discussion on the need for online harms legislation in Canada, with B.C. Premier David Eby calling for national rules on when AI companies should contact police, the lesson isn’t that Canada needs to require more disclosure of user conduct or content to the authorities. Rather, it is that the current frameworks have so little transparency that Mr. Solomon needed a meeting with corporate executives to understand their safety policies and how they are administered.
Indeed, it is greater transparency that should be the starting point of any regulatory framework. Companies such as OpenAI should be required to fully disclose their policies on user safety, including the standards used to escalate matters beyond flagging content or banning users from the platform. If those standards are viewed as inadequate, government should engage in developing uniform requirements for intermediaries that better strike the balance between safety and privacy. Further, the use of reporting processes should be made public through annual transparency reports, providing aggregated numbers on disclosures to government or law enforcement.
The link between AI and the Tumbler Ridge shooting places the safety-privacy balance for AI regulation squarely in the spotlight. Canada needs greater AI transparency and consistent standards for corporate action, but opening the police-reporting floodgates by lowering standards or extending those requirements to any intermediary that touches user content would be a mistake.