2023 US-Canada Summit by Eurasia Group https://flic.kr/p/2osjLzX CC BY 2.0

Why the Online Harms Act is the Wrong Way to Regulate AI Chatbots

In the wake of reports that AI Minister Evan Solomon may press AI companies such as OpenAI to more aggressively report potential safety risks identified in private chats to law enforcement, attention has quickly turned to the Online Harms Act as a potential regulatory solution. The Online Harms Act, or Bill C-63, died on the order paper last year, but is expected to return in some form in the coming months. Given that the Act is tailor-made to address online harms, it isn’t surprising that some would suggest that it could be expanded to cover AI chatbots.

Yet the law was deliberately designed to avoid doing what politicians want the AI companies to do, as it expressly exempted private communications and proactive monitoring from its scope. Indeed, applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards that were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement. In effect, proposals to use the Online Harms Act to regulate AI chatbots risk reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.

The Online Harms Act was crafted to regulate social media platforms, not all digital services. Section 2 defines a social media service as a “website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content.” Regulated services under the bill were defined as social media services that reached a certain threshold of users. The legislative focus was therefore on large-scale dissemination and amplification, namely platforms where harmful content can rapidly reach broad audiences through sharing and recommendation systems.

None of this fits with an AI chatbot. Interactions with chatbots such as ChatGPT do not involve user-to-user communication or public dissemination. A prompt entered into a chatbot is typically visible only to the individual user and the provider. There is no audience exposure risk, which is the central concern animating the Online Harms Act framework.

In fact, the bill reinforced this limitation through an explicit privacy safeguard. Section 6(1) provides that the Act’s duties do not apply in respect of any private messaging feature of a regulated service. Section 6(2) defines private messaging as communications sent to a limited number of users selected by the sender rather than to a potentially unlimited audience. This exclusion reflects a clear policy boundary: the government chose to regulate publicly amplified harms while leaving interpersonal digital communications outside the regime. Chatbot interactions align far more closely with private messaging than with social media publishing since they involve one-to-one exchanges rather than public distribution. Bringing chatbot prompts within the Online Harms Act would therefore require narrowing or effectively bypassing the statute’s privacy protections.

Moreover, Section 7(1) states that nothing in the legislation requires an operator to proactively search content communicated on the service in order to identify harmful content (subject to a narrow exception involving child sexual victimization materials). The current push to apply the Online Harms Act to AI chatbots moves in precisely the opposite direction. Identifying potentially dangerous behaviour from AI chatbot interactions would almost inevitably require analysis of prompts and conversational patterns within private exchanges. In practical terms, it would introduce monitoring into the very environments the Act was structured to avoid regulating.

Neither of these safeguards is there by accident. Both are cited in the Department of Justice’s Charter analysis to justify the bill’s compliance with the Charter of Rights and Freedoms. And both echo the government’s 2021 Online Harms consultation, which sparked widespread criticism after it floated proactive monitoring requirements and mandatory reporting to law enforcement within tight timelines. Critics warned that requiring platforms to actively monitor user communications and rapidly report potentially unlawful content risked creating incentives for over-reporting and expanded surveillance of lawful expression. The consultation was widely viewed as blurring the line between addressing harmful public content and deputizing platforms as agents of law enforcement.

Applying the Online Harms Act to AI chatbot conversations now risks reopening the very issues policymakers previously sought to avoid. In fact, it is difficult to see the difference between content submitted to an AI chatbot and similar content entered into a search query or included in a text message or email. If proactive monitoring of searches, emails, or texts is subject to privacy safeguards, so too should be AI chatbot engagement.

The Online Harms Act failed in large measure because it sought to cover too much, layering in Criminal Code and Human Rights Act provisions alongside the platform liability elements. Expanding the bill to include AI chatbots runs the same risk. There is a role for AI chatbot regulation, but it isn’t an expanded Online Harms Act. Instead, the starting point should be specific transparency-focused legislation that places the emphasis on ensuring there is full disclosure of user safety policies and how they are implemented and enforced.
