When the intersection of law and technology presents seemingly intractable new challenges, policy makers often bet on technology itself to solve the problem. Whether countering copyright infringement with digital locks, limiting access to unregulated services with website blocking, or deploying artificial intelligence to facilitate content moderation, there is a recurring hope the answer to the policy dilemma lies in better technology. While technology frequently does play a role, experience suggests that the reality is far more complicated as new technologies also create new risks and bring unforeseen consequences. So too with the emphasis on age verification technologies as a magical solution to limiting under-age access to adult content online. These technologies offer some promise, but the significant privacy and accuracy risks that could inhibit freedom of expression are too great to ignore.
The Hub runs a debate today on the mandated use of age verification technologies. I argue against it in a slightly shorter version of this post. Daniel Zekveld of the Association for Reformed Political Action (ARPA) Canada makes the case for it in this post.
The Canadian debate over age verification technologies – which has now expanded to include both age verification and age estimation systems – requires an assessment of both the proposed legislative frameworks and the technologies themselves. The last Parliament featured debate over several contentious Internet-related bills, notably streaming and news laws (Bills C-11 and C-18), online harms (Bill C-63) and Internet age verification and website blocking (Bill S-210). Bill S-210 fell below the radar screen for many months as it started in the Senate and received only cursory review in the House of Commons. The bill faced only a final vote in the House but it died with the election call. Once Parliament resumed, the bill’s sponsor, Senator Julie Miville-Dechêne, wasted no time in bringing it back as Bill S-209.
The bill would create an offence for any organization making available pornographic material to anyone under the age of 18 for commercial purposes. The penalty for doing so is $250,000 for the first offence and up to $500,000 for any subsequent offences. Organizations can rely on three potential defences:
- The organization instituted a government-approved “prescribed age-verification or age estimation method” to limit access. There is a major global business of vendors that sell these technologies and who are vocal proponents of this kind of legislation.
- The organization can make the case that there is “legitimate purpose related to science, medicine, education or the arts.”
- The organization took steps required to limit access after having received a notification from the enforcement agency (likely the CRTC).
Note that Bill S-209 has expanded the scope of available technologies for implementation: while S-210 only included age verification, S-209 adds age estimation technologies. Age estimation may benefit from limiting the amount of data that needs to be collected from an individual, but it also suffers from inaccuracies. For example, using estimation to distinguish between a 17 and 18 year old is difficult for both humans and computers, yet the law depends upon it. Given the standard for highly effective technologies, age estimation technologies may not receive government approvals, leaving only age verification in place.
The government would determine through regulation what constitutes valid age verification or age estimation technologies. In doing so, the bill says it must ensure that the method:
(a) is highly effective;
(b) is operated by a third-party organization that deals at arm’s length from any organization making pornographic material available on the Internet for commercial purposes;
(c) maintains user privacy and protects user personal information;
(d) collects and uses personal information solely for age-verification or age-estimation purposes, except to the extent required by law;
(e) limits the collection of personal information to what is strictly necessary for the age verification or age estimation;
(f) destroys any personal information collected for age-verification or age-estimation purposes once the verification or estimation is completed; and
(g) generally complies with best practices in the fields of age verification and age estimation, as well as privacy protection.
Bill S-209 is an improvement over its predecessor as it seeks to exclude search and other incidental distribution, adopts a new standalone definition for pornographic materials, and sets a higher standard for the technology itself. Yet many concerns remain: the bill still envisions court ordered website blocking, including blocking access to lawful content by those entitled to access it. In fact, the bill expressly states blocking may “have the effect of preventing persons in Canada from being able to access material other than pornographic material made available by the organization.” Orders that knowingly block lawful content are certain to raise Charter of Rights challenges.
From a technological perspective, Bill S-209 still relies on technologies that raise both privacy and accuracy concerns and puts government into the business of evaluating those technologies. Based on the analysis from regulators around the world, the mandated implementation of these technologies appears premature at best. For example, the Office of the Privacy Commissioner of Canada conducted a consultation last year on the issue of such technologies, identifying three key categories of harms that age assurance technologies are meant to remedy or that they may cause.
A harm that proponents of these technologies want to mitigate is “the extent of youths’ exposure to sexually explicit material online, the frequency with which this material is of an aggressive or violent nature, and the potential harms to body image or mental health it may cause”. Opponents note that these technologies are harmful in that they would limit young people’s (especially those from marginalized groups) “access to online content or forums” that provide “avenues for community-building, civic engagement, and education”, as well as “self-discovery”. Further, use of such technologies comes with risk of data breaches that would publicly expose people’s online activities, consequently causing “psychological or physical harms” and possibly discouraging them from “operating freely in the digital environment”.
In light of these risks, the OPC emphasized the importance of “ensuring that any use of age assurance is proportionate to the risk being addressed”. It intends to pursue further consultation to issue guidance on when age assurance should be used and how to build privacy protections into the design of age assurance techniques.
The challenges of implementing these technologies have been raised elsewhere. In February 2025, the European Data Protection Board issued a Statement on Age Assurance that establishes ten principles to design GDPR-compliant age assurance, in order to “reconcile the protection of children and the protection of personal data”. Neither current Canadian privacy law nor Bill S-209 fully addresses these principles.
In November 2024, the Australian government commissioned “an age assurance trial to examine options to protect children from harmful content such as pornography and other online age-restricted services, as well as harms on social media”, which will guide its decision-making. One element of the trial was to “evaluate the potential impact of different age assurance technologies on user privacy”. Results of the trial have not yet been released.
While interest in age verification technologies continues to grow, there remain significant privacy and freedom of expression concerns. For Canadians, the potential framework contained in Bill S-209 would heighten the risks with limited safeguards and an uncertain regulatory enforcement framework. Further study and assurances of privacy and expression safeguards are essential before even considering moving ahead with mandating risky age verification technologies.
As the trade body for age verification providers, we were not concerned by the EDPB’s guidance – as an industry we can meet all of its requirements.
Estimation tools can never work perfectly at the margins – i.e. to distinguish a 17 year-old from an 18 year-old. They are really useful for those of us who are a few years above the legal age – say 23 for a law applying at 18 – as if we are estimated to be 23+, statistically there is a vanishingly small chance we are, in fact, still below 18.
You could choose to set the test for 18, but would have to accept a large number of false positives – 16- and 17-year-olds passing. In the old days, that was a lucky break when you walked into a bar and looked old enough to be served, but it’s unlikely such an imprecise approach will be politically acceptable, so “buffer” ages will be required as described above.
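The buffer-age logic described above can be sketched in a few lines. This is an illustrative model only, not any vendor’s actual implementation: the legal age, the buffer width, and the function names are all assumptions, and a real system would tune the buffer from measured estimator error rates.

```python
# Illustrative sketch of "buffer ages" in age estimation (not from the
# bill or any vendor): only users estimated well above the legal age
# pass on estimation alone; those near the threshold need a fallback
# such as document-based verification.

LEGAL_AGE = 18
BUFFER_YEARS = 5  # assumed margin; real systems derive this from error data


def passes_on_estimation(estimated_age: float, margin: float = BUFFER_YEARS) -> bool:
    """Pass only users estimated at or above legal age plus the buffer."""
    return estimated_age >= LEGAL_AGE + margin


def needs_fallback_check(estimated_age: float, margin: float = BUFFER_YEARS) -> bool:
    """Users estimated near the threshold are routed to stronger verification,
    not simply rejected, because the estimate alone is too imprecise."""
    return LEGAL_AGE - margin < estimated_age < LEGAL_AGE + margin
```

With these assumed numbers, a user estimated at 23 or older passes, while a user estimated at 19 (who may in fact be 17) is sent to a fallback check rather than admitted on the estimate.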
The UK Online Safety Act uses both blocking and the removal of critical business services such as payments in its suite of enforcement measures. But these are intended to be punitive until a site comes into compliance, not a general attack on free speech. Access is restored when you obey the law.
And this debate needs to also consider the emerging interoperable, tokenized double-blind solutions for highly efficient and likely cheaper persistent anonymised age assurance, e.g. https://avpassociation.com/interoperability-through-ageaware-from-euconsent/
The digital policy debate in Canada has been littered with good intentions, followed by poorly thought-out solutions with little thought to overall consequences. This one appears to again fit that mold.
The use of blocking as enforcement needs to be weighed against the costs and efforts needed to implement any mandated solution. If those costs are too high, then the likely response is to either move or sell the relevant parts of the service to a jurisdiction without comparable enforcement. Unless you can make age verification all of private, trustworthy, and – completely free to implement! – organizations will always face that decision.
The challenge at that point is that you now depend on blocking. Although the legislation envisions having “the effect of preventing persons in Canada from being able to access material other than pornographic material made available by the organization”, the difficulty is that blocking depends on seeing into the protected communications channel. Increasingly, no step in the chain, from your ISP through every intermediary in this country, can be aware of either the source or the content of a communication. Tools like DoH and DoT keep knowledge of accessed websites limited to the device doing the accessing. Tools like Cloudflare’s anycast routing mean that one website is reachable at hundreds of IP addresses, and those hundreds of IP addresses are shared with hundreds of other sites. Blocking cannot function in this environment: any attempt would either have no effect, or would block many, many sites, most of them completely unrelated.
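The over-blocking problem with shared CDN addresses can be illustrated with a toy model. All site names and IP addresses below are invented; the point is only that an IP-level block of one target necessarily catches every unrelated site behind the same shared addresses.

```python
# Hypothetical illustration of CDN address sharing: blocking the IPs
# that serve one target site also blocks every unrelated site served
# from the same addresses. All names and addresses are invented.

SHARED_HOSTING = {
    "203.0.113.10": ["target-adult-site.example", "recipes.example", "charity.example"],
    "203.0.113.11": ["target-adult-site.example", "news.example"],
}


def collateral_damage(target: str) -> set[str]:
    """Return the unrelated sites caught by an IP-level block of `target`."""
    blocked = set()
    for sites in SHARED_HOSTING.values():
        if target in sites:
            blocked.update(sites)
    blocked.discard(target)
    return blocked
```

In this toy model, blocking the two addresses that serve the target also takes down three unrelated sites; on a real anycast network the ratio is far worse, and with DoH/DoT the blocker cannot even reliably learn which addresses to target.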
Communications are being protected in large part because many totalitarian entities want to break into the channel for political and surveillance reasons. These protections are designed to be robust against nation-states. An ISP or telecoms carrier has essentially no chance of intruding into the communication channel, no matter how important the cause.
A more workable solution is for the device itself to be aware of content restrictions, and limit content on the device. That is the only location where the content can for certain be examined in our jurisdiction. Tagging that clearly indicates a site is only for adults has been available for years (see rtalabel.org); requiring providers to include it would enable this checking in a robust manner.
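Device-side checking against the RTA label is simple to sketch. The meta-tag value below is the published RTA label string; the parser and function names are illustrative, and a real implementation would live in the browser or operating system rather than a script.

```python
# Sketch of device-side detection of the RTA "restricted to adults"
# label (rtalabel.org). The label string is the published RTA value;
# the surrounding code is an illustrative assumption, not any vendor's
# actual implementation.
from html.parser import HTMLParser

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"


class RTADetector(HTMLParser):
    """Flag a page carrying the RTA meta tag in its markup."""

    def __init__(self):
        super().__init__()
        self.restricted = False

    def handle_starttag(self, tag, attrs):
        # html.parser lowercases tag and attribute names for us.
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "rating" and a.get("content") == RTA_LABEL:
                self.restricted = True


def is_adult_labelled(html: str) -> bool:
    parser = RTADetector()
    parser.feed(html)
    return parser.restricted
```

A device shipped with “restricted to adults” limitations enabled would simply refuse to render pages where this check returns true until the restriction is cleared, with no per-site lookups leaving the device.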
Workable legislation might take the form of requiring devices to ship with “restricted to adults” limitations in place, and clearing those restrictions on the device then enables adult access. As there is no association between the restriction removal and any specific website or type of content, there are far fewer privacy concerns.
Trying to solve a problem one website at a time with relatively complex solutions that trigger problematic external consequences? As the ongoing debates over past Canadian digital policy steps show, that only adds huge delays in getting to effective and in-use solutions.
Thanks Chris
You make some good points but I will add some context…
I doubt blocking will be the most effective means of enforcement. Germany blocked one site – let’s call it de.xxx.com and the following day deu.xxx.com had replaced it and somehow all the traffic was automatically diverted from the old to the new. A cat and mouse game – so much better to just tell Visa and Mastercard they can’t take payments for that site, Amazon it can’t host it, and Google it can’t host search results that include it.
The device based options – app stores or operating systems – are much discussed. Apple and Google are offering contributions but are careful to disclaim any responsibility for the age information they help transmit, from a credential in the Google wallet or a parentally set age in iOS. So for services with a legal requirement to apply an age-restriction, there’s no liability chain.
We hope the AgeAware solution cracks that problem, maintaining contractual relationships between age verification providers and their clients while using reusable tokens persistent on a device for a limited period of time, so they remain attached to the user not the machine.
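The kind of device-held token described above can be sketched with symmetric signatures. This is a hypothetical illustration of the double-blind idea, not the AgeAware protocol: the token contains only an age result, an expiry, and a random nonce, with no identity or site information, so the relying site learns nothing beyond the result. In practice verification would use public-key signatures or go through the provider rather than a shared key.

```python
# Hypothetical sketch of a time-limited, anonymised "over 18" token.
# Only the age result, an expiry, and a random nonce are signed; no
# identity or visited-site data is embedded. The token format, key
# handling, and function names are invented for illustration.
import base64
import hashlib
import hmac
import json
import os
import time

VERIFIER_KEY = os.urandom(32)  # held by the age-verification provider


def issue_token(over_18: bool, ttl_seconds: int = 86400) -> str:
    """Provider signs an age result valid for a limited period."""
    payload = json.dumps({
        "over_18": over_18,
        "exp": int(time.time()) + ttl_seconds,
        "nonce": os.urandom(8).hex(),  # prevents tokens being identical
    }).encode()
    sig = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()


def accept_token(token: str) -> bool:
    """Relying site checks signature and expiry; it learns only the result."""
    try:
        p64, s64 = token.split(".")
        payload, sig = base64.b64decode(p64), base64.b64decode(s64)
    except ValueError:
        return False
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return claims["over_18"] and claims["exp"] > time.time()
```

The design point is that the provider never learns which sites accept the token, and the sites never learn who the user is: each party holds only half of the picture, which is what “double-blind” refers to.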
“(f) destroys any personal information collected for age-verification or age-estimation purposes once the verification or estimation is completed”
You mean destroyed AFTER the info has been sent as a copy to the three- and four-letter agencies of the Five Eyes network.
Iain;
If a site is of the type that takes payments, then they are already more likely to have the financial resources to consider spending on age blocking. There remain sites, however, that depend only on advertising, likely running on much smaller budgets. They don’t need payments, and awareness of the site alone bypasses a need for a search engine. Finally – this is still legal content. Trying to get such a site banned from a hosting provider will likely be a much tougher challenge, especially if there remain jurisdictions where age blocking is not required.
Note that my key point above was that there were likely ways to sidestep liability and have the site remain in operation. As long as there have been legal liability requirements, there have been jurisdictions (yes, countries) that have found it worthwhile to avoid imposing those same requirements. Think Liberian vessel registration.
Finally, Canada is creating its own new class of risks for age verification providers, when Bill C-2 proposes that law enforcement can undertake warrantless inquiries of almost any electronic service. An age verification provider would need to provide clear evidence that it never has any personal information at all – not address, name, or any numbers, and especially not any information that links any form of identity to any website. Ever. Not even for a moment. The risk will be that they can be secretly required to hold information they would normally delete, and end up working against the privacy interests of both the websites and their users.
The use of anonymized tokens is not the problem – it is that somewhere in the chain, there is a party to whom they are not anonymized. So – maybe they can do age verification with a fully anonymous user. Or, maybe, if we had Europe’s stronger privacy protections (GDPR…) then this risk would be mitigated that way. But as it stands, the developing Canadian rules around searches risk making an age verification partner into a risk for any company who contracts with them.
If you are going to tout the value of the privacy of a separate and legally mandated age verification service, then you need to also make clear that the same legal mandates must protect privacy to the highest degree. It is a good starting point that “we believe your data is your property and support your right to privacy and transparency”, but if the legal structures can require you to secretly undermine that claim, you have a big challenge ahead.