The government’s AI consultation concluded at the end of October with expectations that a strategy will emerge before the end of the year. I participated in the consultation with a brief submission and an appearance as a witness before the Standing Committee on Canadian Heritage for its study on the effects of technological advances in artificial intelligence on the creative sector. That study touched on many of the same issues as the AI consultation, with robust discussion on transparency, regulation, and navigating potentially conflicting policy objectives. This week’s Law Bytes podcast offers up a taste of both, with the key issues raised in the submission and clips from the committee appearance, including my opening statement and exchanges with multiple MPs.
We Need More Canada in the Training Data: My Appearance Before the Standing Committee on Canadian Heritage on AI and the Creative Sector
The government, led by AI Minister Evan Solomon, is currently conducting a consultation on AI regulation that has attracted criticism for its short time frame. At the same time, however, the Standing Committee on Canadian Heritage has been working through a study on AI and the creative sector that may be more limited in scope, but has featured a broader range of perspectives. I had the opportunity to appear before the committee yesterday, where I lamented that too often the debate on new technology “is framed as a threat, emphasizes cross-industry subsidies, and misses the opportunities new technology presents.” We therefore need risk analysis that rejects entrenching the status quo and instead assesses the risks of both the technology and the policy response. I’ll post the full discussion (which ventured into AI transparency, copyright, the news sector, and much more) in a future Law Bytes podcast episode. In the meantime, my opening statement is embedded and posted below.
The Law Bytes Podcast, Episode 233: Abdi Aidid on AI, the Law and the Future of Legal Practice
The discussion on the intersection between AI and the law, especially with respect to legal services, continues to grow. From lawyers who mistakenly rely on AI-generated cases to AI support for due diligence and document review, the role of AI within legal practice has emerged as a critical issue. Professor Abdi Aidid is a law professor at the University of Toronto, where he has focused on these issues for many years, well before the public’s attention was captured by generative AI services like ChatGPT. Currently a Visiting Associate Professor of Law at Yale Law School, Professor Aidid was a VP with BlueJ Legal, an early AI legal startup, and is the co-author of The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better. He joins the Law Bytes podcast to discuss all things AI and the law, including what these technologies may mean for legal practice.
Solomon’s Choice: Charting the Future of AI Policy in Canada
The decision to create a Minister for Artificial Intelligence sends an unmistakable signal that the Carney government recognizes the need to prioritize AI as a core part of its economic strategy. My Globe and Mail op-ed notes that while few doubt the importance of AI, what the federal government should do about it is far less certain. The Trudeau government emphasized both government handouts and regulation, with billions in AI spending promises on the one hand and ill-considered legislation that was out of step with global trends on the other. The result was a mish-mash of incoherent policies that left the AI sector confused, civil society frustrated and Canada at risk of being left behind.
Elevating AI to a full ministerial position suggests Prime Minister Mark Carney wants to fix the status quo, but in some ways the new office looks like an impossible job dressed up in ambition. Evan Solomon, the minister, steps into a role that is full of symbolism but operationally murky. Mr. Solomon may well find that cutting more cheques or introducing regulations won’t solve the issue.
Canadian Media Companies Target OpenAI in Copyright Lawsuit But Weak Claims Suggest Settlement the Real Goal
Canada’s largest media companies, including the Globe and Mail, Toronto Star, Postmedia, CBC, and Canadian Press, came together last week to file a copyright infringement lawsuit against OpenAI, the owner of ChatGPT. The lawsuit is the first high-profile Canadian claim lodged against the enormously popular AI service, though similar suits have been filed elsewhere, notably including a New York Times lawsuit launched last year. While the lawsuit itself isn’t a huge surprise, the relatively weak, narrow scope of the claims, discussed below, is. Unlike comparable lawsuits, the Canadian media companies’ claim is largely limited to data scraping, which may be the weakest copyright claim. Moreover, the companies say they have no actual knowledge of when, where, or how their data was accessed, an acknowledgement that doesn’t inspire confidence when there is evidence available if you know where to look.
So why file this lawsuit? The claim is sprinkled with the most obvious reason: the Canadian media companies want a settlement that involves OpenAI paying licence fees for the inclusion of their content in its large language models, and the lawsuit is designed to kickstart negotiations. The companies aren’t hiding the ball, as there are repeated references along the lines of “at all times, Open AI was and is well aware of its obligations to obtain a valid licence to use the Works. It has already entered into licensing agreements with several content creators, including other news media organizations.” The takeaway is that Canadian media companies want to license their stuff too, much like the licensing agreements with global media companies such as News Corp, Financial Times, Hearst, Axel Springer, Le Monde, and the Associated Press.