Post Tagged with: "ai"
Canadian Media Companies Target OpenAI in Copyright Lawsuit But Weak Claims Suggest Settlement the Real Goal
Canada’s largest media companies, including the Globe and Mail, Toronto Star, Postmedia, CBC, and Canadian Press, came together last week to file a copyright infringement lawsuit against OpenAI, the owner of ChatGPT. The lawsuit is the first high-profile Canadian claim lodged against the enormously popular AI service, though similar suits have been filed elsewhere, notably including a New York Times lawsuit launched last year. While the lawsuit itself isn’t a huge surprise, the relatively weak, narrow scope of the claims, discussed below, is. Unlike comparable lawsuits, the Canadian media companies’ claim is largely limited to data scraping, which may be the weakest copyright claim. Moreover, the companies say they have no actual knowledge of when, where, or how their data was accessed, an acknowledgement that doesn’t inspire confidence when there is evidence available if you know where to look.
So why file this lawsuit? The claim is sprinkled with the most obvious reason: the Canadian media companies want a settlement that involves OpenAI paying licence fees for the inclusion of their content in its large language models, and the lawsuit is designed to kickstart negotiations. The companies aren’t hiding the ball, as there are repeated references along the lines of “at all times, Open AI was and is well aware of its obligations to obtain a valid licence to use the Works. It has already entered into licensing agreements with several content creators, including other news media organizations.” The takeaway is that the Canadian media companies want to licence their content too, much like the licensing agreements OpenAI has reached with global media companies such as News Corp, Financial Times, Hearst, Axel Springer, Le Monde, and the Associated Press.
The Law Bytes Podcast, Episode 203: Andrew Clement on Calls to Separate Privacy Reform and Artificial Intelligence Regulation in Bill C-27
Bill C-27, Canada’s proposed privacy reform and AI regulation bill, continues to slowly work its way through the committee process at the House of Commons, with the clause-by-clause review of the AI portion of the bill still weeks or even months away. Recently, a group of nearly 60 leading civil society organizations, corporations, experts, and academics released an open letter calling on the government to separate the bill into two.
Andrew Clement has been an important voice in that group, as he tracked not only the committee hearings but also dug into the consultation process surrounding the bill. Clement is a Professor Emeritus in the Faculty of Information at the University of Toronto, where he coordinates the Information Policy Research Program and co-founded the Identity, Privacy and Security Institute (IPSI). He joins the Law Bytes podcast to talk about AI regulation in Canada and concerns with the bill, and to offer insights into the legislative and consultative process.
AI Spending is Not an AI Strategy: Why the Government’s Artificial Intelligence Plan Avoids the Hard Governance Questions
The government announced plans over the weekend to spend billions of dollars to support artificial intelligence. Billed as “securing Canada’s AI Advantage”, the plan includes promises to spend $2 billion on an AI Compute Access Fund and a Canadian AI Sovereign Compute Strategy that is focused on developing domestic computing infrastructure. In addition, there is $200 million for AI startups, $100 million for AI adoption, $50 million for skills training (particularly for those in the creative sector), $50 million for an AI Safety Institute, and $5.1 million to support the Office of the AI and Data Commissioner, which would be created by Bill C-27. While the plan received unsurprising applause from AI institutes that have been lobbying for the money, I have my doubts. There is unquestionably a need to address AI policy, but this approach appears to paper over hard questions about AI governance and regulation. The money may be useful – though given the massive private sector investment in the space right now, a better case for public money is needed – but tossing millions at each issue is not the equivalent of grappling with AI safety, copyright, or regulatory challenges.
The Law Bytes Podcast, Episode 191: Luca Bertuzzi on the Making of the EU Artificial Intelligence Act
European countries reached agreement late last week on a landmark legislative package to regulate artificial intelligence. AI regulation has emerged as a key issue over the past year as the explosive growth of ChatGPT and other generative AI services has sparked legislation, lawsuits, and national consultations. The EU AI Act is heralded as the first of its kind and as a model for Canadian AI rules. Luca Bertuzzi is a Brussels-based tech journalist who was widely regarded as the leading source of information and analysis about the unfolding negotiations involving the EU AI Act. He joins the Law Bytes podcast to explain the EU process, the ongoing opposition by some countries, and the future steps for AI regulation in Europe.