The Jewish holiday of Purim over the weekend sparked the usual array of political tweets featuring some odd interpretations of the meaning of the holiday and expressing varying degrees of support for the Jewish community. But coming off one of the worst weeks in memory – cancelled Jewish events due to security concerns, antisemitism in the mainstream media, deeply troubling comments on the floor of the House of Commons, and the marginalization of some Jewish MPs in government – generic statements of support no longer cut it. The Globe and Mail has noted the “dangerous slide into antisemitism” and called for a House motion unequivocally condemning antisemitism. This post provides further context to that piece, arguing that such a motion is necessary but insufficient, since what is desperately needed is leadership and real action from our politicians, university presidents, and community groups.
Better Laws, Not Bans: Why a TikTok Ban is a Bad Idea
New legislation making its way through the U.S. Congress has placed a TikTok ban back on the public agenda. The app is already prohibited on government devices in Canada, the government has quietly conducted a national security review, and there are new calls to ban it altogether from the Canadian market. While it might be tempting for some politicians to jump on the bandwagon, a ban would be a mistake. There are legitimate concerns with social media companies, but there simply hasn’t been convincing evidence that TikTok currently poses a national security threat or that it presents a greater risk than any other social media service. The furor seems to be less a genuine case that TikTok poses a unique privacy and security threat than a case of economic nationalism – a desire to deny a popular Chinese service access to the U.S. market. Taken at face value, however, the case against TikTok comes down to a simple concern: its owner, ByteDance, is a Chinese company that could theoretically be required to disclose user information to the Chinese government or be compelled to act on its behalf. The proposed U.S. law would therefore require that TikTok be sold within six months or face a ban.
While the concerns associated with TikTok, given its Chinese ownership and popularity with younger demographics, are well known, the privacy and security case against it is very weak.
Government Gaslighting Again?: Unpacking the Uncomfortable Reality of the Online Harms Act
The Online Harms Act was only introduced two weeks ago, but already it appears that the government is ready to run back the same playbook of gaslighting and denials that plagued Bills C-11 and C-18. Those bills, which addressed Internet streaming and news, faced widespread criticism over the potential regulation of user content and the prospect of blocked news links on major Internet platforms. Rather than engage in a policy process that took the criticism seriously, the government ignored digital creators (including disrespecting Indigenous creators) and dismissed the risks of Bill C-18 as a bluff. The results of that strategy are well known: Bill C-11 required a policy direction fix and is mired in a years-long regulatory process at the CRTC, while news links have been blocked for months on Meta as the list of Canadian media bankruptcies and closures mounts.
Bill C-63, the Online Harms Act, offered the chance for a fresh start, given that the government seemed to accept the sharp criticism of its first proposal and engaged in a more open consultative process in response. As I noted when the bill was first tabled, the core of the legislation addressing the responsibility of Internet platforms was indeed much improved. Yet it was immediately obvious that there were red flags, particularly with respect to the Digital Safety Commission charged with enforcing the law and the inclusion of Criminal Code and Human Rights Act provisions with overbroad penalties and the potential to weaponize speech complaints. The hope – based on the more collaborative approach used to develop the law – was that there would be a “genuine welcoming of constructive criticism rather than the discouraging, hostile processes of recent years.” Two weeks in, that hope is rapidly disappearing.
Taking Action Against Antisemitic Hate: When Content Moderation, Self-Regulation, and Legislation Fail
The explosive growth of antisemitism in Canada since October 7th is well documented, with shootings at schools, the need for a regular police presence at synagogues and community centres, arrests on terrorism offences, and protests targeting Jewish-owned businesses and communities. In that context, some antisemitic graffiti at a bus stop in Toronto over the weekend might have been just one more incident to add to a list that now runs into the hundreds. Yet I found the image of “No Service For Jew Bastards” particularly chilling, evoking memories of the Holocaust and of similar hateful messages that have frequently targeted minority communities over the years. I proceeded to post a tweet and a LinkedIn post with the photo and a caption:
Why the Criminal Code and Human Rights Act Provisions Should Be Removed from the Online Harms Act
Having spent virtually the entire day yesterday talking with media and colleagues about Bill C-63, one thing has become increasingly clear: the Criminal Code and Human Rights Act provisions found in the Online Harms Act should be removed. In my initial post on the bill, I identified the provisions as one of three red flags, warning that they “feature penalties that go as high as life in prison and open the door to a tidal wave of hate speech related complaints.” There is no obvious need or rationale for penalties of life in prison for offences motivated by hatred, nor for weaponizing human rights complaints by reviving Human Rights Act provisions on the communication of hate speech. As more Canadians review the bill, there is a real risk that these provisions will overwhelm the Online Harms Act and become a primary area of focus despite not being central to the law’s core objective of mitigating harms on Internet platforms.