Slick Videos Won’t Save Lawful Access: Why The Government’s Bill C-22 Defence Avoids the Charter, Privacy and Security Concerns Raised By Critics
With opposition to Bill C-22, the lawful access bill, mounting, Public Safety Minister Gary Anandasangaree has turned to social media with a video defending the bill as one that “respects Canadian privacy and Charter rights.” The video signals that the government has noticed the growing public concern. But the case against the bill, which I argued in committee testimony last week and in a series of earlier posts, rests on at least four issues the government has yet to engage with: mandated metadata retention (ignored entirely in the bill’s Charter Statement), a lowered threshold for access to subscriber information that weakens privacy protections, security risks that now alarm Canada’s closest allies, and an oversight architecture that the oversight body itself says is incomplete.
U.S. Congressional Leaders Warn Canadian Lawful Access Plans Harm U.S. National Security and Economic Interests
Just as Bill C-22, the Lawful Access Act, is under study at the House Standing Committee on Public Safety and National Security (I reviewed my appearance yesterday in this post), U.S. Congressional leaders have written to Public Safety Minister Gary Anandasangaree warning that the bill threatens to harm “U.S. national security and economic interests by undermining trust in American technology and inviting reciprocal demands from other nations.” The message is clear: U.S. leaders are concerned that the bill’s lawful access demands go so far as to compromise the privacy not only of Canadians, but of Americans too.
Why Social Media and AI Chatbot Bans for Kids Are Bad Policy: Making the Case at the Senate Social Affairs, Science and Tech Committee
The Standing Senate Committee on Social Affairs, Science and Technology is one of several committees in the House and Senate conducting hearings on artificial intelligence. I appeared before the committee yesterday (my fourth appearance on the issue in recent months), but rather than reiterate previous testimony on privacy, copyright, and transparency, I focused on the big issue of the moment: bans on social media and AI chatbots for children. The committee had been hearing from many witnesses supportive of a ban, who emphasized the risks of harm associated with AI. Indeed, one Senator asked the panel before mine to raise their hands if they supported a ban, and virtually all hands went up. I was unsure how my comments would be received, but I found the Senators open to debate on the issue. A video of my opening remarks, together with the transcript, is posted below. A future Law Bytes podcast episode will delve into the discussion that followed.
Government Has a Choice: Why an AI Chatbot Ban for Kids is an Even Worse Idea Than a Social Media Ban
The frenzy to ban kids from social media continues to grow, with Culture Minister Marc Miller telling a House of Commons committee that the government has no choice but to act. Miller’s comments are consistent with the federal Liberal policy convention vote backing a minimum age of 16 and with Manitoba Premier Wab Kinew announcing that his government will be the first in Canada to ban kids from both social media and AI chatbots. The problem, as I documented in detail last week, is that good intentions do not make for good policy. A social media ban is bad policy because it does not address the underlying problems with the platforms, because the evidence to date suggests it does not work, and because it creates harms of its own. But the bad policy does not end there, as the possibility of extending that same framework to AI chatbots is now squarely on the table. This post examines the implications of a ban on kids’ use of AI chatbots, arguing that such an approach is even worse than a social media ban. To be clear, regulation of AI chatbots is needed, but a ban leaves the genuine concerns associated with AI chatbots largely untouched.