Canadian Heritage Minister Pablo Rodriguez released a “What We Heard Report” on the government’s consultation on online harms earlier today. To the government’s credit, the report is remarkably candid as it does not shy away from the near-universal criticism that its plans sparked, including concerns related to freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms. The report provides a play-by-play of these concerns, leaving little doubt that a major reset is required. The government telegraphed a change in approach with the Rodriguez mandate letter, which explicitly stated that the online harms legislation “should be reflective of the feedback received during the recent consultations.”
How the government reached the point of releasing such a deeply flawed plan is worthy of further examination (not to mention that former Canadian Heritage Minister Steven Guilbeault repeatedly stated that he planned to introduce legislation without further consultation), but this post seeks instead to summarize some of the key findings. The consultation was widely viewed as flawed from the outset as it read more like a roadmap than a genuine attempt to garner feedback and was held during a national election campaign when few were paying attention. The government acknowledges the criticism in its report, noting “there was a predominantly critical perspective from civil society, academia and industry on both the process of the consultation and the design and substance of the framework itself.” In fact, on the substance, the government admits “only a small number of submissions from those stakeholders were supportive, or mostly supportive, of the framework as a whole.”
If that was not bad enough, Canadian Heritage then refused to disclose the submissions it received, a position it maintains to this day, arguing that confidentiality allows businesses to submit sensitive information and victims' groups to share personal experiences. I have posted all the public submissions I can find alongside two posts that seek to summarize the feedback (here and here). In all, the government says it received 422 unique responses: 350 from individuals, 39 from civil society, 19 from industry, 13 from academics, and 2 from government or government-adjacent organizations (my submission can be found here).
The government’s “What We Heard” report rightly notes that there is support for doing something, but that the submissions “identified a number of overarching concerns including concerns related to freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms more generally.”
In particular, the concerns focused on who would be regulated, what content moderation obligations might be established (particularly proactive monitoring and 24-hour takedowns), the independence and oversight of new regulatory bodies, what content would be captured by the rules, the enforcement tools (particularly website blocking), and the mandatory reporting of content to law enforcement. As my post on the submissions notes (as does my personal submission), these concerns were repeatedly raised by groups and experts from across the political and policy spectrum.
The “What We Heard” report does a good job of summarizing the feedback and does not shy away from owning up to the criticism. For example, on content issues it notes that “most respondents criticized the regime for introducing types of content that are too diverse to be treated within the same regime.” Respondents also worried that a “one-size-fits-all” approach would not adequately address the nuances of the different types of content, and there was widespread concern about expanding the regulatory framework to capture so-called “awful but lawful” speech, with an emphasis on limiting any content moderation to illegal speech.
The content moderation issues attracted considerable attention as “most stakeholders flagged these obligations as extremely problematic. The proactive monitoring obligation was considered by many as being inconsistent with the right to privacy and likely to amount to pre-publication censorship…multiple respondents also considered the obligation to be a human rights infringement.” Further, “a significant majority of respondents asserted that the 24 hour requirement [to takedown content] was systematically flawed, because it would incentivize platforms to be over-vigilant and over-remove content, simply to avoid non-compliance with the removal window. Nearly all respondents agreed that 24 hours would not be sufficient time to thoughtfully respond to flagged content, especially when that content requires a contextual analysis.”
Concerns about the use of artificial intelligence also figured prominently in the responses: “the majority of respondents were of the view that both the 24-hour inaccessibility requirement and the proactive monitoring obligations would force platforms to make problematic use of AI tools to fulfill their duties.” These concerns were raised by the very groups the policy was designed to help. For example, “victim advocacy groups explained that the use of algorithms for content moderation decisions would lead to discriminatory censorship of content produced by certain marginalized communities, in some contexts…multiple respondents emphasized that the proposed approach to content moderation would likely hurt certain marginalized communities.”
On the enforcement front, responses raised questions about the new regulators, including staffing, resources, and oversight. The regulators’ powers raised even bigger concerns. Website blocking received particular mention, since “most respondents questioned whether the blocking provision was necessary, effective, or proportionate…Multiple respondents criticized the proposal for allowing the blocking of entire platforms, advocating instead for a more targeted and human rights compliant proposal of targeting specific webpages. A few advocates for sex workers explained that the overbreadth of the power was particularly worrisome to them as it would enable the censorship of sites that are crucial for sex workers’ safety.”
Privacy concerns also figured prominently in the responses. The report notes that “many were critical of the proposal requiring that platforms report information on users to law enforcement and national security agencies without appropriate safeguards (e.g., judicial oversight or notification of affected individuals). Stakeholders explained that the requirements would pose a significant risk to individuals’ right to privacy.” Further, mandatory reporting requirements were viewed as “likely to disproportionately impact certain marginalized communities.”
The report cites numerous other concerns and identifies issues where there was support. These include ensuring that the rules apply to all major platforms, that private and encrypted communication be excluded, that there be easy-to-use flagging mechanisms for content, greater platform transparency and accountability, effective administration, protection from real-world violence, and appropriate enforcement tools for non-compliance.
As for what comes next, Rodriguez now says that he plans to convene a new expert panel to develop recommendations for legislative reform. That is a good start, but the panel’s starting point should surely be the consultation submissions. They provide not only criticism of the government’s initial plan, but also a genuine attempt to offer constructive reforms that better balance the widely held desire to address online harms with the need to safeguard freedom of expression, privacy, and fundamental rights and freedoms.