Meta CEO Mark Zuckerberg yesterday announced significant new changes to the company’s content moderation policies. The five-minute video is worth watching in its entirety, as it demonstrates the shifting political sands that seemingly pressured even the world’s largest social media company to pay heed. Zuckerberg said the company’s reliance on third-party fact checkers had resulted in too much censorship and vowed to return to an emphasis on freedom of expression. That means the fact checkers are gone, replaced by the Twitter (X) model of community notes. Moreover, the company is moving its content moderation team from California to Texas (a nod to claims the California-based teams were biased), increasing the amount of political content in user feeds, and pledging to work with the Trump administration to combat content regulation elsewhere, including in Europe and South America.
With more than three billion users, the implications of the decision are enormous assuming the same approach is taken in all markets (the company is starting in the U.S.). But beyond what it means for Facebook and Instagram users, the change is likely part of a broader shift in Internet regulation with the pendulum swinging back toward lighter touch rules coming out of the United States. In other words, the recent experience on Twitter that has left many uncomfortable may become the norm, not the outlier.
In thinking about the decision, I reached back to my appearance six weeks ago before the Standing Committee on Canadian Heritage as part of its study on protecting freedom of expression. My opening statement included the following:
I think Bill C-11 and Bill C-18 both have indirect effects on expression. In the case of Bill C-11, supporters were far too dismissive of the implications of regulating user content, with some going so far as to deny it was in the bill, only to later issue a policy direction that confirmed its presence.
Bill C-18 not only led to the blocking of news links but also failed to recognize that linking to content is itself expression. The net effect has been to cause harm to news-related expression in Canada. We need to do better when it comes to digital policy, as we haven’t always taken the protection of expression sufficiently seriously in the digital policy debate.
Second, there is expression that chills other expression. This can occur when expression includes harassment or strikes fear in some communities, invariably leading to a chill in their ability to express themselves. My own community, the Jewish community, is a case in point. The rise in anti-Semitism, in a manner not seen in Canada in generations, has sparked safety fears and chilled expression.
The committee was prescient in addressing freedom of expression issues, though its study will never see the light of day given the government’s decision to prorogue Parliament. While Meta’s changes seem largely driven by political considerations, some of the concerns Zuckerberg identifies are real. In the Canadian context, the government was too dismissive of the speech implications of the online streaming and online news bills. As a result, Facebook has blocked news links in Canada for more than 18 months, a policy that is a direct result of government legislation and which harms freedom of expression. In fact, the government doubled down on the policy by urging the CRTC to review user screenshots of news, thereby also targeting the speech of millions of Canadians. Those bills – when combined with the now-dead Bill C-63 on online harms – are genuine concerns and require a rethink that better centres freedom of expression.
However, my comments also sought to emphasize that fixing bad legislation is not the same as rejecting all legislation or all efforts to address speech that can cause harm. Community notes has been a valuable innovation, but it does not replace actual efforts to identify illegal or harmful content. There is a need for platforms to act responsibly (to borrow the language of Bill C-63), and that includes mitigating real harms with appropriate policies, transparency, and a consistent application of their own rules. If they are unwilling to do so, legislation and potential liability are needed.
The experience of the Jewish community is a case in point. While some content rises to the level of illegality, much of the barrage of antisemitic content online falls within the "awful but lawful" category. This is legal content that nonetheless causes serious harms, as the numerous antisemitic attacks in Canada amply demonstrate. It is important to emphasize that community moderation on sites like Twitter or Wikipedia does not solve these issues and may in some instances make matters worse.
This week feels like the start of a new era in Internet policy. In the U.S., it seems likely that efforts to curry favour with the Trump Administration will not end with Meta and that many other companies will follow a similar approach. In Canada, the online harms bill is dead and changes may be coming for the digital policies that are now law. In fact, U.S. pressure to change those laws may be on the agenda. There is a need for a policy correction, but this new era also brings significant risks. Many of the policies were born out of legitimate concerns about the consequences of harmful disinformation. Community notes alone won't solve the issue, and left unchecked, the results may chill the very speech the companies profess to support. The last few years have been marked by digital policies that were too dismissive of expression risks and too quick to paint critics as anti-regulation when many were simply urging smarter regulation that did a better job of striking the balance between competing objectives. In the emerging new era of Internet regulation, that should remain our goal.
Part of the issue that I see with the concept of content moderation is that free speech itself is treated inconsistently: there are places where rights such as free speech are considered far more absolute (for instance, the US) than they are in Canada (thanks to sections 1 and 33 of the Charter). How, then, does this apply to multinational social media? The approach that I see Meta taking is toward minimal moderation; rather than proactively taking down or reducing the visibility of posts to protect people from seeing things they may not agree with, the process now depends on actual complaints.
As an example of what was happening, both Twitter and Facebook were caught suppressing posts, and Zuckerberg has subsequently admitted that the third parties employed would suppress posts that went against the fact checkers' politics, even if those posts would otherwise have been acceptable. For instance, early in the COVID-19 pandemic, the "fact checkers" actively suppressed people and posts, including from communicable disease scientists, that raised the theory of a Chinese lab leak as the source of the virus, in order to push the government-approved theory of the Wuhan wet market.
Moving the content moderation team to Texas seems to be in line with the idea of minimal moderation, in that the people of Texas are far more libertarian than the people of California.
It effectively highlights the complexities of balancing free speech with the need to mitigate harm, particularly in the context of Meta’s recent policy changes. While the shift toward community moderation and lighter regulation may seem like a win for free expression, the article underscores the risks of inadequate safeguards against harmful content. The discussion on Canadian legislation further enriches the analysis by illustrating the tension between protecting expression and addressing harmful disinformation. Overall, it emphasizes the need for thoughtful, balanced digital policies in an era of rapid change.
I think I can see where you are coming from, but I have concerns about some of the items that you highlight.
What constitutes harm, or harmful content? And are we talking about potential harm or actual harm, or is it that some simply don't want to be exposed to ideas that fall outside their own viewpoint?
Similarly, what constitutes disinformation? An example that I have used in the past is that the earth is round. Six hundred or so years ago, common and conventional knowledge held that the earth was flat, and the idea that it was round was, in today's vernacular, disinformation. Today we know that the earth is in fact round (well, an oblate spheroid). In today's environment, one claim deemed disinformation in some places was that the COVID-19 vaccines had side effects, and this information was suppressed. However, it was in fact known at the time that there were some side effects. It was deemed disinformation because it went against the official government line, which was geared toward getting more people vaccinated, even if it meant they couldn't make an informed decision on getting the jab.
It effectively highlights the tension between fostering open dialogue and addressing harmful content, drawing on both U.S. and Canadian contexts to illustrate the challenges of regulating digital speech. The critique of legislative shortcomings, such as those in Canadian digital policies, is well-argued, particularly in addressing unintended consequences like chilling expression or enabling harmful content. However, the piece could benefit from a clearer roadmap for achieving the proposed “smarter regulation” to balance expression and harm mitigation, especially in a rapidly evolving digital landscape.
As someone who had their social media account suspended for mistakenly flagged content, I’m eager to see how Meta’s reforms will balance freedom of expression with the need to protect users from online harm.
This article raises some great points! It seems Meta’s content moderation changes are largely politically driven. On one hand, it’s good to see more focus on free speech, but these changes might leave harmful content like hate speech unchecked. We need to ensure free speech doesn’t come at the cost of safety.
I disagree with the idea that Meta’s content moderation changes are a step forward. Removing fact-checkers and relying on community-driven notes could lead to more misinformation and hate speech. We need stronger, not weaker, regulation to ensure online spaces are safe and accountable, especially when dealing with harmful content.