Earlier this week, I appeared before the Standing Senate Committee on Transport and Communications as part of its study on AI regulation. This follows earlier appearances before the House of Commons Heritage and Industry committees on the same issue. The hearing led to robust exchanges with multiple Senators on the intersection of AI policy with issues such as privacy, copyright, online harms, and sovereignty. I plan to post clips from the hearing in a future Law Bytes podcast, but in the meantime, my opening statement provides a good sense of my views on AI regulation with respect to privacy, copyright, and the need for an AI Transparency Act. A video of the opening statement is embedded below, followed by the text.
Appearance before the Standing Senate Committee on Transport and Communications, April 21, 2026
Good morning. My name is Michael Geist. I’m a law professor at the University of Ottawa where I hold the Canada Research Chair in Internet and E-commerce Law. I appear in a personal capacity representing only my own views.
Thank you for the invitation. AI is one of the most consequential policy challenges we face. In my opening remarks, I want to focus on three critical issues: privacy, copyright, and the need for an AI Transparency Act.
First, privacy. Canadian private sector privacy law is widely recognized as badly out of date. Modernization would help establish much-needed safeguards for AI data, fix weak enforcement, and address data sovereignty concerns. But AI is reshaping the privacy discussion, and simply restarting past reform efforts is insufficient.
Getting this right from an AI perspective requires addressing both sides of the AI equation: what goes into AI models and what comes out. On the input side, there is a notable global shift toward more permissive treatment of personal information used for AI training. Japan, the UK, and the EU are softening rules, and Canada will undoubtedly face pressure to follow.
The output side has received less attention but may prove more consequential. Modern AI can combine harmless fragments of information and draw inferences that re-identify individuals from information that was never meant to be personal. AI’s real privacy threat isn’t what it learns. It’s what it figures out. Reform must treat the two sides differently: flexibility on inputs paired with innovative approaches to outputs, including inference auditing.
Privacy reform is where data sovereignty is won or lost. Domestic AI infrastructure may sound like sovereignty, but the servers could be in Gatineau and it wouldn’t matter if Canadian privacy law doesn’t apply or if weak enforcement lets extraterritorial laws like the U.S. CLOUD Act fill the gap.
Second, copyright. In the AI context, the application of copyright isn’t clear cut. Outputs rarely rise to the level of actual infringement. Inputs are the subject of numerous lawsuits, but few have resulted in liability to date, with courts suggesting that inclusion in large language models often qualifies as fair use or fair dealing. The market may well develop new licensing models. But mandating licensing or imposing new restrictions on fair dealing would render Canada a more difficult and costly country for AI. This presents two risks.
First, AI development is likely to shift outside the country, with less investment and fewer economic opportunities. Indeed, without a text-and-data mining exception, as is found in many other countries, the risk may already be here.
Second, we want to ensure Canadian culture and heritage are well represented in an AI world. But if Canada becomes an outlier with licensing requirements that make Canadian content more costly or harder to include, AI developers will simply exclude it. The result will be less Canada in the training data and less Canada in the outputs. We saw this pattern with news on social media: regulation intended to support Canadian sources produced fewer of them and more substitutable alternatives. More Canada in AI outputs requires more Canada in the training data and our policies should reflect that.
Third, an AI Transparency Act. The lack of transparency around AI systems has eroded public trust. The recent concerns about OpenAI and the Tumbler Ridge shooter are a case in point. It should not take a meeting with company executives for the Minister — or anyone else — to know about the company’s policies on banning user accounts or reporting conduct to police. Greater transparency should be the starting point of any regulatory framework. An AI Transparency Act should do three things.
First, ensure AI corporate policies on user safety are publicly accessible, including the standards for escalating beyond flagging content or banning users. Second, mandate transparency on which works are included in large language models so creators have the information they need to exercise choice and seek content removals on an opt-out basis if they wish. Third, require companies to publish annual transparency reports on government and law enforcement demands targeting users or content. This approach addresses real concerns without gutting privacy or locking in rules that may not fit a fast-moving landscape.
Canada has a genuine opportunity here. We have AI talent, growing public attention to governance, and cross-party interest in getting this right. The worst thing we could do is waste that opportunity on the wrong policies. I look forward to your questions.