After years of delay, the government tabled Bill C-63, the Online Harms Act, earlier today. The bill is really three-in-one: the Online Harms Act itself, which creates new duties for Internet companies and a sprawling new enforcement system; changes to the Criminal Code and Canadian Human Rights Act that meet longstanding requests from groups to increase penalties and enforcement against hate, but which will raise expression concerns and invite a flood of complaints; and an expansion of the mandatory reporting of child pornography to ensure that it includes social media companies. This post will seek to unpack some of the key provisions, but with a 100+ page bill, that will require multiple posts and further analysis. My immediate response to the government materials was that the bill is significantly different from the 2021 consultation and that many of the worst fears – borne of years of poorly thought out digital policy – have not been realized. Once I worked through the bill itself, however, concerns began to grow about the enormous power vested in the new Digital Safety Commission, which has the feel of a new CRTC funded by the tech companies.
At a high level, I offer several takeaways. First, even with some of the concerns identified below, this is better than what the government had planned back in 2021. That online harms consultation envisioned measures such as takedowns without due process, automated reporting to law enforcement, and website blocking. Those measures are largely gone, replaced by an approach that emphasizes three duties: a duty to act responsibly, a duty to make certain content inaccessible, and a duty to protect children. That is a much narrower approach and draws heavily from the expert panel formed after the failed 2021 consultation.
Second, there are at least three big red flags in the bill. The first involves the definitions for harms such as inciting violence, hatred, and bullying. As someone who comes from a community that has faced relentless antisemitism and real threats in recent months, I think we need some measures to combat online harms. However, the definitions carry risks that they may be interpreted in an overbroad manner, with implications for freedom of expression. Second – related to the first – is the incredible power vested in the Digital Safety Commission, which will have primary responsibility for enforcing the law. The breadth of powers is remarkable: rulings on making content inaccessible, investigation powers, hearings that under certain circumstances can be closed to the public, establishing regulations and codes of conduct, and the power to levy penalties of up to 6% of the global revenues of services caught by the law. There is an awful lot there, and questions about Commission oversight and accountability will be essential. Third, the provisions involving the Criminal Code and Canadian Human Rights Act require careful study, as they feature penalties that go as high as life in prison and open the door to a tidal wave of hate speech related complaints.
Third, this feels like the first Internet regulation bill from this government that is driven primarily by policy rather than by implementing the demands of lobby groups or seeking to settle scores with big tech. After the battles over Bills C-11 and C-18, it is difficult to transition to a policy space where experts and stakeholders debate the best policy rather than participating in the consultation theatre of the past few years. The bill notably does not include Bill S-210 style age verification or website blocking. There will need to be adjustments in Bill C-63, particularly efforts to tighten up definitions and ensure effective means to watch the watchers, but perhaps that will come through a genuine welcoming of constructive criticism rather than the discouraging, hostile processes of recent years.
Now to the bill with a mini FAQ.
Which services are caught by the bill?
The bill covers social media services, defined as “a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content.” The Act adds that this includes adult content services and live streaming services. The service must meet a certain threshold of users in Canada for the law to apply (the threshold to be determined).
What duties do these services face?
As noted above, there are three duties: a duty to act responsibly, a duty to make certain content inaccessible, and a duty to protect children. The duty to act responsibly is the most extensive, and it focuses on “measures that are adequate to mitigate the risk that users of the service will be exposed to harmful content on the service.” The Digital Safety Commission will be empowered to rule on whether companies have met this duty. Requirements include offering users the ability to block other users and to flag content. The services must maintain available contacts and submit a digital safety plan to the Commission for review. There are detailed rules on what must be included in the plan. The services must also make their data available to researchers, which can be valuable but also raises potential privacy and security risks. The Commission would be responsible for accrediting researchers.
The duty to make certain content inaccessible focuses on two kinds of content: content that sexually victimizes a child or revictimizes a survivor, and intimate content communicated without consent. The service must respond to flagged content and render it inaccessible within 24 hours. There is a notification and review process that follows.
The duty to protect children requires services to “integrate into a regulated service that it operates any design features respecting the protection of children, such as age appropriate design, that are provided for by regulations.” There are few details in the legislation at this stage about what this means.
What harms are covered by the bill?
There are seven: sexually victimizing children, bullying a child, inducing a child to harm themselves, violent extremism/terrorism, inciting violence, fomenting hatred, and intimate content communicated without consent, including deepfakes.
How are these defined?
The definitions are where concerns may arise in some instances. They are as follows:
Intimate content communicated without consent. This involves visual recordings involving nudity or sexually explicit activity where the person had a reasonable expectation of privacy and did not consent to the communication of the recording.
content that foments hatred means content that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination, within the meaning of the Canadian Human Rights Act, and that, given the context in which it is communicated, is likely to foment detestation or vilification of an individual or group of individuals on the basis of such a prohibited ground.
Note that, for the purposes of this definition, content does not express detestation or vilification solely because it expresses disdain or dislike or because it discredits, humiliates, hurts or offends.
content that incites violence means content that actively encourages a person to commit – or that actively threatens the commission of – an act of physical violence against a person or an act that causes property damage, and that, given the context in which it is communicated, could cause a person to commit an act that could cause
(a) serious bodily harm to a person;
(b) a person’s life to be endangered; or
(c) serious interference with or serious disruption of an essential service, facility or system.
content that incites violent extremism or terrorism means content that actively encourages a person to commit – or that actively threatens the commission of – for a political, religious or ideological purpose, an act of physical violence against a person or an act that causes property damage, with the intention of intimidating or denouncing the public or any section of the public or of compelling a person, government or domestic or international organization to do or to refrain from doing any act, and that, given the context in which it is communicated, could cause a person to commit an act that could cause
(a) serious bodily harm to a person;
(b) a person’s life to be endangered; or
(c) a serious risk to the health or safety of the public or any section of the public.
content that induces a child to harm themselves means content that advocates self-harm, disordered eating or dying by suicide or that counsels a person to commit or engage in any of those acts, and that, given the context in which it is communicated, could cause a child to inflict injury on themselves, to have an eating disorder or to die by suicide.
content used to bully a child means content, or an aggregate of content, that, given the context in which it is communicated, could cause serious harm to a child’s physical or mental health, if it is reasonable to suspect that the content or the aggregate of content is communicated for the purpose of threatening, intimidating or humiliating the child.
content that sexually victimizes a child or revictimizes a survivor is a very long definition that includes multiple visual representations.
These are all obvious harms. The challenge will be to ensure that there is an appropriate balance between freedom of expression and safeguarding against such harms. There are clearly risks that these definitions could chill some speech, and a close examination of each definition will be needed.
How will the law be enforced?
This is the biggest red flag in the bill in my view. Enforcement lies with the new Digital Safety Commission, an entity appointed by the government with between three and five commissioners, including a Chair and Vice-Chair. The Commission’s powers are incredibly broad ranging. It can issue rulings on making content inaccessible, conduct investigations, demand any information it wants from regulated services, hold hearings that under certain circumstances can be closed to the public (the default is open), establish regulations and codes of conduct, issue compliance orders, and levy penalties of up to 6% of the global revenues of services caught by the law for compliance violations. Failure to abide by Commission orders can result in penalties of up to 8% of global revenues. The scope of the regulations covers a wide range of issues.
The law says the Commission must consider privacy, freedom of expression, and equality rights, among other issues. Yet despite those powers, the Commission is not subject to any legal or technical rules of evidence, as the law speaks of acting informally and expeditiously, an approach that seems inconsistent with its many powers.
In addition to the Commission, there are two other bodies: the Digital Safety Ombudsperson, who is responsible for supporting users, and the Digital Safety Office, which supports the Commission and Ombudsperson.
Who pays for all this?
Potentially the tech companies. The Act includes the power to establish regulations that would require the services caught by the Act to fund the costs of the Commission, Ombudsperson, and Office.
What about the Criminal Code and Human Rights Act provisions?
There are several new provisions designed to increase the penalties for online hate. This includes longer potential prison terms under the Criminal Code, including life in prison for advocating or promoting genocide. There are also expanded rules within the Canadian Human Rights Act that open the door to an influx of complaints over communicating hate speech (note that this does not include linking or private communications), with penalties as high as $20,000. These provisions will likely be a lightning rod over concerns about the chilling of speech and overloading the Human Rights Commission with online hate related complaints.
And the mandatory reporting of child pornography?
These provisions expand the definition of Internet services caught by the reporting requirements.
The utter lack of due process in terms of actions by the DSC should make this a non-starter, as a bunch of officials appointed by the government of the day are given the power to effectively operate as a pseudo-court minus the restrictions and safeguards that courts afford. One could certainly envision a government appointing its own partisans with the intention of targeting online speech that said government finds personally distasteful, knowing that many people lack the ability to appeal these judgments.
F for the extreme lack of independent judicial due process and F- for the government continuing to pretend that this isn’t a censorship tool that it can use to silence speech it disagrees with.
My initial reaction when I read about the tech companies being required to fund the DSC was that I was reminded of Donald Trump’s 2016 campaign promise to build a wall on the border with Mexico and that Mexico would be made to pay for it.
The issue of the DSC being appointed by the government is an issue for me as well. While in theory the DSC may operate independently of the government, the government can control the DSC’s agenda by selecting commissioners who are in line with government priorities, and if commissioners are eligible for reappointment, they are incentivized to go along with the government’s wishes if they want to be extended.
I also agree that the ability of the DSC to levy fines, etc., is an issue; I would prefer a setup where the DSC can recommend charges to the Public Prosecution Service of Canada, which then proceed through the courts. This gets around the issue of the DSC being accuser, judge, jury and executioner with respect to the Online Harms Act, in the same way that Elections Canada operates with respect to the Canada Elections Act (although in the case of the latter, it also investigates issues where Elections Canada itself is the target of the complaint).
Don’t we already have laws against all these things, online or offline?
It’s funny that all these things are very different things.
How will tech companies tell who is Canadian?
I just don’t understand how this will all work
I think you should focus on your last statement, and return when you no longer feel it’s true.
Basically we do, but that hasn’t stopped the current government before. If you look at the list of firearms that were made prohibited a few years ago, many were included because it was possible (although maybe not economical or useful) to convert them to fully automatic. However, section 102.1 of the Criminal Code of Canada, on the books since 1985, already deals with this and stipulates, on conviction of an indictable offence, a prison sentence of between 1 and 10 years.
It’s not as bad as it might have been. It leaves out most of the more wacky stuff that came out of the consultation process. But it’s not that they had better ideas — they simply leave it all up to the future Digital Safety Commission to decide what kind of measures it thinks are appropriate under sections 56 and 65.
The people who inserted the telling phrase “age appropriate design” in there have apparently recognized that what they want is not possible for a democratic government to directly impose, and hope to get their way through the Commission instead.
What rules will there be to prevent the complaints process from being abused, that is, used to harass, intimidate, and censor people and organizations? For example, what would happen if Unifor used a bot to flood Facebook with frivolous complaints? Or one political party ran a campaign encouraging its members to file complaints against another party? Or a jealous ex harassed their former partner by filing false complaints?
It’ll still be used to police speech.
“Hate speech” may include “genocide denial”, which will include questioning the evidence for mass graves, etc.
“Hate speech” could also include simple statements like saying that some communities commit crimes at higher rates than others, etc.
And another party coming into power might define “hate speech” as including various statements made by DEI advocates or about white people, and seek to censor those as “hate speech”.
Unless the bill explicitly says somewhere that “truth is an absolute defence”, you can expect this to be abused for political purposes. Probably by both sides.
This is an Orwellian nightmare. Shame on any Canadian who does anything but condemn it unequivocally. If social media is harming children (and it IS) then get children off social media. Trudeau is doubling down on tyranny by calling all speech that he hates hate speech and then criminalizing it.
“Freedom includes the right to say what others may object to and resent. . . The essence of citizenship is to be tolerant of strong and provocative words.
I am a Canadian, free to speak without fear, free to worship in my own way, free to stand for what I think right, free to oppose what I believe wrong, or free to choose those who shall govern my country. This heritage of freedom I pledge to uphold for myself and all mankind.” – John G Diefenbaker
Actual research is pretty overwhelming that social media is NOT harming kids. Techdirt has a decent overview at https://www.techdirt.com/2023/12/18/yet-another-massive-study-says-theres-no-evidence-that-social-media-is-inherently-harmful-to-teens/
Agreed. It feels like the government is trying to do the job of parents so parents don’t have to parent their children’s online activities.
And I have no doubt that this will be used to quell opposing views.
It has been demonstrated often in recent years that feelings trump truth in these kangaroo courts. Truth is no defense if it brings so-called protected groups into disrepute; even if the government itself collects the evidence, making it available to the public can be construed as fomenting hate.
What’s the definition of hate speech?
Within this bill:
Definition of hate speech
(8) In this section, hate speech means the content of a communication that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.
Clarification – hate speech
(9) For greater certainty, the content of a communication does not express detestation or vilification, for the purposes of subsection (8), solely because it expresses disdain or dislike or it discredits, humiliates, hurts or offends.
Note that it appears that the defence as specified in Section 319 (3)a of the Criminal Code of Canada does not apply.
This was in response to Millennial 1%er.
Pingback: Terry Glavin: Under hate speech bill, wouldn't Trudeau be guilty of vilifying Catholics? - Freshsociety
Pingback: The incredibly harmful Online Harms Act « Quotulatiousness
I feel it is important to scrub my entire online existence now that this is becoming law. Jews all over Canada are falsely accused of hate crimes and silenced through bans, blocking, or online bullying.
Considering the UN couldn’t decide if Rwanda was or wasn’t a “genocide” how is that even a specific item? The reason is very obvious for those constantly accused of having dual loyalty rather than being treated like Canadians.
Life in prison for a word that has been diluted to serve almost every purpose, meanwhile there are racist, prejudiced, and antisemitic words/phrases that should absolutely be included.
Hate speech isn’t a universal word. People make up specific hurtful words for each demographic.
This isn’t online protection, it’s online intimidation.
Pingback: #AxisOfEasy 339: Unveiling The Era Of AI Deception: The Alarming Ease Of Falling Victim To Scams – AxisOfEasy
Pingback: TikTok On The Chopping Block? - Six Pixels of Separation
Pingback: Opinion: Requiring age-verification for porn won’t save children from online harm. But it will invade our privacy - RavensGrid