Datacenter informatique de l'École Polytechnique. Photo credit: © École polytechnique - J.Barande, CC BY-SA 2.0, https://flic.kr/p/SA7f3L

Setting Canada’s AI Policy Priorities: My Appearance Before the Standing Committee on Industry, Science and Technology

The Standing Committee on Industry, Science and Technology is one of several House and Senate committees currently grappling with legal, regulatory and policy challenges and opportunities presented by AI. I appeared before the committee yesterday alongside Yoshua Bengio and Colin Bennett. Bengio unsurprisingly garnered the lion’s share of the questions, but the committee did give me the chance to highlight my thoughts on policy priorities and to address a few questions. I plan to post some reflections on the policy tensions in the coming days. In the meantime, the video and text of my opening statement are posted below.

Appearance before the House of Commons Standing Committee on Industry, Science and Technology, March 23, 2026

Good afternoon. My name is Michael Geist. I’m a law professor at the University of Ottawa where I hold the Canada Research Chair in Internet and E-commerce Law. I appear in a personal capacity representing only my own views.

I think we all recognize that we are at a moment when there is mounting pressure to do something quickly on AI regulation. That pressure is understandable but risky. I submit that we can’t simply fall back on “doing something.” The goal must be well-considered legal and regulatory frameworks that balance facilitating innovation with safeguards against potential risks and harms. And I have concerns that our initial efforts to find that balance have led to a haphazard amalgam of proposals that risk doing more harm than good. Let me provide four quick examples of where I have concerns, then shift to three recommendations.

First, Bill C-27, the former privacy and AI bill, always felt like a rushed response to that pressure "to do something" on AI. It largely mirrored the EU approach, which has failed to find broad global support. Reviving it under a new name would repeat the same mistake and potentially undermine our AI competitiveness. Risk-based analysis may have a role to play in future regulation, but even some European countries, such as France, have slowly backed away from the EU AI Act.

Second, the recent push to add AI chatbots to online harms legislation is similarly ill-conceived. Doing so would not simply extend existing online safety rules to a new technology beyond the original social media focus. The Online Harms Act explicitly exempted private messaging from the regulatory regime and did not require services to engage in proactive monitoring. Extending the Act to AI chatbots would require gutting the very privacy protections the government added after its earlier proposals were widely criticized.

Third, calls for copyright reform to address the use of works in large language models are premature. In fact, we should consider adding a text-and-data mining exception to keep us competitive. Many copyright cases are working their way through the courts right now, and those decisions will provide legal guidance and shape market deals. Legislating too quickly risks locking in rules that don't match the legal and market landscape.

Fourth, the emphasis on data or digital sovereignty typically presents Canadian infrastructure as a solution to sovereignty concerns. Yet the real issue is whether Canadian laws apply to Canadian data, regardless of location. The answer is they often don’t. The push for domestic AI infrastructure sounds like sovereignty, but if Canadian privacy laws don’t apply to how Canadian data is used, the servers could be in Gatineau and it wouldn’t matter.

So what to prioritize?

First, prioritize passing modernized privacy and data governance laws. There is consensus that the current law is badly out of date. Modernized privacy law would help establish much-needed safeguards for the use of AI data, fix weak privacy enforcement, and go a long way toward addressing data sovereignty concerns.

Second, introduce and pass an AI Transparency Act. The lack of transparency around AI systems is directly correlated with diminished public trust. The recent concerns about OpenAI and the Tumbler Ridge shooter are a case in point. It should not take a meeting with company executives for the Minister – or anyone else – to know about the company's policies on banning user accounts or reporting conduct to the police. An AI Transparency Act should do three things: (1) ensure AI corporate policies are publicly accessible, (2) mandate transparency about which works are included in large language models so that creators have the information they need to potentially seek content removals, and (3) require transparency reports on government and law enforcement efforts targeting users or seeking content removals.

Third, as Professor Scassa noted to this committee, there are already many disparate guidelines and guidance on the use of AI. Existing laws also apply to AI as they do in other contexts. We need to reduce the rhetoric, avoid panic-driven policies, and provide Canadians and businesses with a clearer sense of what has been done and how the strategy fits together. That includes maintaining an emphasis on facilitating AI development by making datasets available, supporting training, and fostering private investment. And it should also include acting on consultations based on what the government actually hears from stakeholders, not on what it would like to hear. The recent reports on the expert and the public response to the AI 30-day sprint consultation did not fully reflect the responses.

Canada has a genuine opportunity here. We have AI talent, growing public attention to governance, and cross-party interest in getting this right. The worst thing we could do is waste that opportunity on the wrong legislation.
