Archive for April, 2023

Canada’s Privacy Failure: Federal Court Dismisses Privacy Commissioner’s Complaint Against Facebook Over Cambridge Analytica

The Federal Court of Canada last week dismissed the Privacy Commissioner of Canada’s complaint against Facebook stemming from alleged privacy violations involving Cambridge Analytica. The Privacy Commissioner ruled against Facebook in 2019, but Facebook disagreed with the findings and took the matter to court. Last week, the court sided with the social media giant, concluding that the Privacy Commissioner did not provide sufficient evidence that Facebook failed to obtain meaningful consent when sharing information with third-party applications, and rejecting a claim that Facebook did not adequately safeguard user information. The Cambridge Analytica case sparked investigations and complaints worldwide, leading to a $5 billion penalty in the U.S., significant settlements of private lawsuits, fines in the UK, and extensive new rules in the European Union. Yet in Canada, the case against the company has been dismissed, raising troubling questions about how it was handled and the adequacy of Canadian privacy law.
The Law Bytes Podcast, Episode 163: Cohere AI CEO Aidan Gomez on the Emerging Legal and Regulatory Challenges for Artificial Intelligence
ChatGPT burst onto the public scene late last year, giving artificial intelligence its “aha moment” for many people. AI is now seemingly everywhere, attracting enormous attention and excitement alongside concerns, legal threats and talk of regulation. The potential of AI is evident to just about everyone, but the challenges associated with bias, copyright, privacy, misinformation and more can’t be ignored. Cohere AI is a Canadian-based AI firm that is widely viewed as one of Canada’s AI stars for its large language models that enable companies of all sizes to integrate AI technologies. Aidan Gomez, who worked on the “T” in ChatGPT, is the co-founder and CEO of Cohere AI. He joins the Law Bytes podcast to talk about AI and his views on the myriad emerging legal and regulatory issues.
Bill C-11 Estimates Revealed: Internal Government Documents Show No Impact on Net Employment, Admit Streamers Already Invest Millions in “Unofficial Cancon”
The government’s support for Bill C-11 has often been framed in economic terms, with Canadian Heritage Minister Pablo Rodriguez arguing that the bill will “create good jobs for Canadians in the cultural sector”. I’ve long maintained that the government’s claims that the bill would generate billions of dollars in new money were massively exaggerated, and that a far more likely scenario is that the bill would simply lead to a reshuffling of existing expenditures.
Using the Access to Information Act, I have now obtained a copy of the government’s internal estimates for the economic and production impact of Bill C-11 (methodology, memorandum, PPT), which confirm many of my suspicions. While the government is pinning its hopes on massive spending from Internet streamers such as Netflix, it admits that even if the bill did not pass, net new employment in the sector would be unaffected. Moreover, the government internally recognizes that the claim that Netflix and foreign streamers don’t contribute to Canadian content is false, as it identifies a new category of “unofficial Cancon” that would qualify as Cancon under every measure but for the fact that it is owned by companies like Netflix and Disney. And as for the payments from social media companies that the government insists are so essential that it has fought for years to include user content regulation in the bill? The estimated economic benefit represents just one percent of its total projection for Bill C-11, with pure guesswork about what percentage of content on the platforms might require contributions.
The Canadian Heritage Online Harms Credibility Gap, Part Two: Filtering Out Critics From Participating in Anti-Hate Consultation Survey
The government’s online harms bill, led by Canadian Heritage, is likely to be introduced in the coming weeks. My series on why the department faces a significant credibility gap on the issue opened with a look at its misleading and secretive approach to the 2021 online harms consultation, including its decision to disclose public submissions only when compelled to do so by law and its release of a misleading “What We Heard” report that omitted crucial information. Today’s post focuses on another Canadian Heritage consultation, which occurred months later, on proposed anti-hate plans. As the National Post reported earlier this year, after the consultation launched, officials became alarmed when responses criticizing the plan and questioning government priorities began to emerge. The solution? The department remarkably decided to filter out the critics from participating in the consultation by adding a new question that short-circuited the survey for anyone who responded that they did not think anti-hate measures should be a top government priority.