The discussion on the intersection between AI and the law, especially with respect to legal services, continues to grow. From lawyers who mistakenly rely on AI-generated cases to AI-supported due diligence and document review, the role of AI within legal practice has emerged as a critical issue. Professor Abdi Aidid is a law professor at the University of Toronto, where he has focused on these issues for many years, well before the public’s attention was captured by generative AI services like ChatGPT. Professor Aidid is currently a Visiting Associate Professor of Law at Yale Law School, was previously a VP with Blue J Legal, an early AI legal startup, and is the co-author of The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better. He joins the Law Bytes podcast to discuss all things AI and the law, including what these technologies may mean for legal practice.
Solomon’s Choice: Charting the Future of AI Policy in Canada
The decision to create a Minister for Artificial Intelligence sends an unmistakable signal that the Carney government recognizes the need to prioritize AI as a core part of its economic strategy. My Globe and Mail op-ed notes that while few doubt the importance of AI, what the federal government should do about it is far less certain. The Trudeau government emphasized both handouts and regulation, with billions in AI spending promises on the one hand and ill-considered legislation that was out of step with global trends on the other. The result was a mish-mash of incoherent policies that left the AI sector confused, civil society frustrated, and Canada at risk of being left behind.
Elevating AI to a full ministerial position suggests Prime Minister Mark Carney wants to fix the status quo, but in some ways the new office looks like an impossible job dressed up in ambition. Evan Solomon, the minister, steps into a role that is rich in symbolism but operationally murky. Mr. Solomon may well find that cutting more cheques or introducing regulations won’t solve the issue.
Canadian Media Companies Target OpenAI in Copyright Lawsuit But Weak Claims Suggest Settlement the Real Goal
Canada’s largest media companies, including the Globe and Mail, Toronto Star, Postmedia, CBC, and Canadian Press, came together last week to file a copyright infringement lawsuit against OpenAI, the owner of ChatGPT. The lawsuit is the first high-profile Canadian claim lodged against the enormously popular AI service, though similar suits have been filed elsewhere, notably including a New York Times lawsuit launched last year. While the lawsuit itself isn’t a huge surprise, the relatively weak, narrow scope of the claims, discussed below, is. Unlike comparable lawsuits, the Canadian media companies’ claim is largely limited to data scraping, which may be the weakest copyright claim. Moreover, the companies say they have no actual knowledge of when, where, or how their data was accessed, an acknowledgement that doesn’t inspire confidence when there is evidence available if you know where to look.
So why file this lawsuit? The claim is sprinkled with the most obvious reason: the Canadian media companies want a settlement that involves OpenAI paying licence fees for the inclusion of their content in its large language models, and the lawsuit is designed to kickstart negotiations. The companies aren’t hiding the ball, as there are repeated references along the lines of “at all times, Open AI was and is well aware of its obligations to obtain a valid licence to use the Works. It has already entered into licensing agreements with several content creators, including other news media organizations.” The takeaway is that the Canadian media companies want to license their content too, much like OpenAI’s existing licensing agreements with global media companies such as News Corp, Financial Times, Hearst, Axel Springer, Le Monde, and the Associated Press.
The Law Bytes Podcast, Episode 203: Andrew Clement on Calls to Separate Privacy Reform and Artificial Intelligence Regulation in Bill C-27
Bill C-27, Canada’s proposed privacy reform and AI regulation bill, continues to slowly work its way through the committee process at the House of Commons, with the clause-by-clause review of the AI portion of the bill still weeks or even months away. Recently, a group of nearly 60 leading civil society organizations, corporations, experts, and academics released an open letter calling on the government to split the bill in two.
Andrew Clement has been an important voice in that group, as he has tracked not only the committee hearings but also dug into the consultation process surrounding the bill. Clement is a Professor Emeritus in the Faculty of Information at the University of Toronto, where he coordinates the Information Policy Research Program and co-founded the Identity, Privacy and Security Institute (IPSI). He joins the Law Bytes podcast to talk about AI regulation in Canada and concerns with the bill, and to offer insights into the legislative and consultative process.
AI Spending is Not an AI Strategy: Why the Government’s Artificial Intelligence Plan Avoids the Hard Governance Questions
The government announced plans over the weekend to spend billions of dollars to support artificial intelligence. Billed as “securing Canada’s AI Advantage”, the plan includes promises to spend $2 billion on an AI Compute Access Fund and a Canadian AI Sovereign Compute Strategy focused on developing domestic computing infrastructure. In addition, there is $200 million for AI startups, $100 million for AI adoption, $50 million for skills training (particularly for workers in the creative sector), $50 million for an AI Safety Institute, and $5.1 million to support the Office of the AI and Data Commissioner, which would be created by Bill C-27. While the plan received unsurprising applause from AI institutes that have been lobbying for the money, I have my doubts. There is unquestionably a need to address AI policy, but this approach appears to paper over hard questions about AI governance and regulation. The money may be useful, though given the massive private sector investment in the space right now, a better case for public money is needed. In any event, tossing millions at each issue is not the equivalent of grappling with AI safety, copyright, or regulatory challenges.