In the wake of reports that AI Minister Evan Solomon may press AI companies such as OpenAI to more aggressively report potential safety risks identified in private chats to law enforcement, attention has quickly turned to the Online Harms Act as a potential regulatory solution. The Online Harms Act, or Bill C-63, died on the order paper last year but is expected to return in some form in the coming months. Given that the Act is tailor-made to address online harms, it isn’t surprising that some would suggest it could be expanded to cover AI chatbots.
Yet the law was deliberately designed to avoid doing what politicians now want AI companies to do: it expressly exempted private communications and proactive monitoring from its scope. Indeed, applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards that were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement. In effect, proposals to use online harms legislation to regulate AI chatbots risk reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.







